poky: subtree update:835f7eac06..20946c63c2
Aaron Chan (1):
python3-dbus: Add native and nativesdk variants
Adrian Bunk (8):
gnome: Remove the gnome class
bind: Remove RECIPE_NO_UPDATE_REASON and follow the ESV releases
webkitgtk: Reenable on mips
mtd-utils: Upgrade to 2.1.1
Change ftp:// URIs to http(s)://
webkitgtk: Stop disabling gold on aarch64 and mips
grub/libmpc/gdb: Use GNU_MIRROR in more recipes
screen: Backport fix for an implicit function declaration
Alexander Kanavin (28):
btrfs-tools: update 5.1.1 -> 5.2.1
libmodulemd: update to 2.6.0
libwebp: upgrade 1.0.2 -> 1.0.3
createrepo-c: upgrade 0.14.2 -> 0.14.3
webkitgtk: upgrade 2.24.2 -> 2.24.3
bzip2: fix upstream version check
stress-ng: add a recipe that replaces the original stress
meson: update 0.50.1 -> 0.51.1
meson.bbclass: do not pass native compiler/linker flags via command line
meson: add a backported patch to address vala cross-compilation errors
libedit: fix upstream version check
maintainers.inc: assign acpica to Ross
stress-ng: add a patch to remove unneeded bash dependency
elfutils: use PRIVATE_LIBS for the ptest package
apt: add a missing perl runtime dependency
attr: add a missing perl runtime dependency
ofono: correct the python3 runtime dependency
bluez5: correct the python3 runtime dependency
local.conf.sample: do not add sdl to nativesdk qemu config
maintainers.inc: give python recipes to Oleksandr Kravchuk
python-numpy: remove the python 2.x version of the recipe
python-scons: remove the python 2.x version of the recipe
python-nose: remove the python 2.x version of the recipe
lib/oeqa/utils/qemurunner.py: add runqemuparams after kvm/nographic/snapshot/slirp
mesa: enable glx-tls option in native and nativesdk builds
insane.bbclass: in file-rdeps do not look into RDEPENDS recursively
sudo: correct SRC_URI
ovmf: fix upstream version check
Andreas Obergschwandtner (1):
bzip2: set the autoconf package version to the recipe version
Anuj Mittal (11):
mpg123: upgrade 1.25.10 -> 1.25.11
libsdl: remove
pulseaudio: don't include consolekit when systemd is enabled
libsdl2: upgrade 2.0.9 -> 2.0.10
grub: upgrade 2.02 -> 2.04
patch: fix CVE-2019-13636
python: fix CVE-2018-20852
python: CVE-2019-9947 is the same as CVE-2019-9740
libtasn1: upgrade 4.13 -> 4.14
pango: upgrade 1.42.4 -> 1.44.3
harfbuzz: upgrade 2.4.0 -> 2.5.3
Bartosz Golaszewski (1):
qemu: add a patch fixing the native build on newer kernels
Bedel, Alban (3):
rng-tools: start rngd early in the boot process again
kernel-uboot: remove useless special casing of arm64 Image
boost: Fix build and enable context and coroutines on aarch64
Bruce Ashfield (2):
linux-yocto/4.19: update to v4.19.61
linux-yocto-dev: bump to 5.3-rcX
Changqing Li (6):
runqemu: add lockfile for port used when slirp enabled
runqemu: fix portlock acquisition failure for multiple users
qemuboot-x86: move QB_SYSTEM_NAME to corresponding conf
genericx86-64.conf/genericx86.conf: add QB_SYSTEM_NAME
grub/grub-efi: fix conflict for aarch64
go-runtime: remove conflict files from -dev packages
Chen Qi (1):
sudo: use nonarch_libdir instead of libdir for tmpfiles.d
Chin Huat Ang (1):
cve-update-db-native: fix https proxy issues
Chris Laplante via bitbake-devel (1):
bitbake: fetch2/wget: avoid 'maximum recursion depth' RuntimeErrors when handling 403 codes
Daniel Ammann (2):
image_types: Remove remnants of hdddirect
bitbake: toaster: Sync list of fs_types with oe-core
Denys Dmytriyenko (2):
wayland-protocols: upgrade 1.17 -> 1.18
weston: upgrade 6.0.0 -> 6.0.1
Diego Rondini (1):
image_types.bbclass: make gzipped images rsyncable
Dmitry Eremin-Solenikov (1):
kernel.bbclass: fix installation of modules signing certificates
Frederic Ouellet (1):
systemd: Add partial support of drop-in configuration files to systemd-systemctl-native
Hongxu Jia (1):
grub: add grub-native
Jason Wessel (6):
sqlite3: Fix zlib determinism problem
pseudo: Fix openat() with a symlink pointing to a directory
image_types_wic.bbclass: Copy the .wks and .env files to deploy image dir
wic: Add partition type for msdos partition tables
wic: Make disk partition size consistently computed
dpkg: Provide update-alternative for start-stop-daemon
Johann Fridriksson (1):
ruby: Adding zlib-native to native dependencies
Joshua Lock via Openembedded-core (3):
sstate: fix log message
classes/sstate: don't use unsigned sstate when verification enabled
classes/sstate: regenerate sstate when signing enabled
Joshua Watt (1):
bitbake: hashserv: SQL Optimizations
Kai Kang (3):
subversion: add packageconfig boost
epiphany: set incompatible with tune mips
e2fsprogs: 1.44.5 -> 1.45.3
Khem Raj (23):
strace: Upgrade to 5.2
linux-libc-headers: Fix ptrace.h and prctl.h conflict on aarch64
libnss-nis: Fix build with glibc 2.30
lttng-ust: Check for gettid libc API
ltp: Fix build with glibc 2.30
lttng-tools: Fix build with glibc 2.30
xserver-xorg: Backport patch to remove using sys/io.h
Apache-2.0-with-LLVM-exception: Add new license file
libedit: Move from meta-oe
groff: Fix math.h inclusion from system headers issue
webkitgtk: Fix compile failures with clang
glibc: Update to glibc 2.30
virglrenderer: Fix endianness check on musl
syslinux: Override hardcoded toolnames in Makefile
systemd-boot: Add option to specify cross objcopy and use it
mesa,llvm,meson: Update llvm to 8.0.1 plus define and use LLVM version globally
musl: Update to master tip
oeqa/buildgalculator.py: Add dependency on gtk+3
oeqa/parselogs: grep for exact errors list keywords
gcc-runtime: Move content from gcclibdir into libdir
gdb: Do not set musl specific CFLAGS
linuxloader: Add entries for riscv64
musl: Delete GLIBC_LDSO before creating symlink with lnr
Luca Boccassi (1):
python3-pygobject: remove python3-setuptools from RDEPENDS
Mads Andreasen (1):
bitbake: fetch2/npm: Use npm pack to download node modules instead of wget
Mark Hatle (2):
glibc-package.inc: Add linux-libc-headers-dev to glibc-dev
bitbake: layerindexlib: Fix parsing of recursive layer dependencies
Martin Jansa (3):
icecc.bbclass: catch subprocess.CalledProcessError
powertop: import a fix from buildroot
meson: backport fix for builds with -Werror=return-type
Ming Liu (5):
libx11-compose-data: add recipe
libxkbcommon: RDEPENDS on libx11 compose data
weston: change to use meson build system
license_image.bbclass: drop invalid comments
opensbi: handle deploy task under sstate
Naveen Saini (2):
gdk-pixbuf: enable x11 PACKAGECONFIG option
image_types_wic: add syslinux-native dependency conditional
Oleksandr Kravchuk (17):
python3-pip: update to 19.2.1
python3-git: update to 2.1.12
ethtool: update to 5.2
python3-git: update to 2.1.13
xorgproto: update to 2019.1
xserver-xorg: update to 1.20.5
ell: update to 0.21
libinput: update to 1.14.0
wpa-supplicant: update to 2.9
aspell: update to 0.60.7
linux-firmware: add PE back
xf86-input-libinput: update to 0.29.0
git: update to 2.22.1
xrandr: update to 1.5.1
python3-git: update to 3.0.0
librepo: update to 1.10.5
libevent: update to 2.1.11
Pascal Bach (2):
cmake: 3.14.5 -> 3.15.1
cmake: 3.15.1 -> 3.15.2
Paul Eggleton (2):
scripts/create-pull-request: improve handling of non-SSH remote URLs
scripts/create-pull-request: fix putting subject containing / into cover letter
Piotr Tworek (2):
pulseaudio: Backport upstream fix for new alsa compatibility.
libdrm: Move amdgpu.ids file into libdrm-amdgpu package.
Randy MacLeod (1):
ptest-runner: update from 2.3.1 to 2.3.2
Rasmus Villemoes (1):
iproute2: drop pointless configure-cross.patch
Ricardo Neri (5):
ovmf: Update to version edk2-stable201905
ovmf: Set PV
ovmf: Use HOSTTOOLS' python3
ovmf: Generate test Platform key and first Key Exchange Key
runqemu: Add support to handle EnrollDefaultKeys PK/KEK1 certificate
Ricardo Ribalda Delgado (2):
packagegroup-core-base-utils: Make it machine specific
inetutils: Fix abort on invalid files
Richard Purdie (50):
package: Improve determinism
sstate: Reduce race windows
bitbake: siggen: Import unihash code from OE-Core
bitbake: cache: Add SimpleCache class
bitbake: runqueue: Improve scenequeue processing logic
bitbake: siggen: Add new unitaskhashes data variable which is cached
bitbake: siggen: Convert to use self.unitaskhashes
bitbake: runqueue: Enable dynamic task adjustment to hash equivalency
bitbake: runqueue: Improve determinism
bitbake: cooker/hashserv: Allow autostarting of a local hash server using BB_HASHSERVE
bitbake: hashserv: Turn off sqlite synchronous mode
bitbake: prserv: Use a memory journal
bitbake: hashserv: Use separate threads for answering requests and handling them
bitbake: hashserv: Switch from threads to multiprocessing
bitbake: runqueue: Clean up BB_HASHCHECK_FUNCTION API
bitbake: siggen: Clean up task reference formats
bitbake: build/utils: Drop bb.build.FuncFailed
bitbake: tests/runqueue: Add hashserv+runqueue test
bitbake: bitbake: Bump version to 1.43.1 for API changes
sanity.conf: Require bitbake 1.43.1
classes/lib: Remove bb.build.FuncFailed
sstatesig: Move unihash siggen code to bitbake
sstatesig: Add debug for incorrect hash server settings
sstatesig: Adapt to recent bitbake hash equiv runqueue changes
sstatesig: Update to handle BB_HASHSERVE
sstate/sstatesig: Update to new form of BB_HASHCHECK_FUNCTION
sstatesig: Updates to match bitbake siggen changes
gstreamer: Add fix for glibc 2.30
sstatesig: Fix leftover splitting issue from siggen change
python3-pygobject: Add missing pkgutil RDEPENDS
bitbake: runqueue: Fix corruption issue
bitbake: runqueue: Improve setscene task handling logic
bitbake: tests/runqueue: Add further hash equivalence tests
bitbake: cooker: Improve hash server startup code to avoid exit tracebacks
bitbake: runqueue: Wait for covered tasks to complete before trying setscene
bitbake: runqueue: Fix next_buildable_task performance problem
bitbake: runqueue: Improve scenequeue debugging
bitbake: runqueue: Recompute holdoff tasks from scratch
bitbake: runqueue: Fix event timing race
bitbake: runqueue: Drop debug statement causing performance issues
bitbake: runqueue: Add further debug information
bitbake: runqueue: Add missing setscene task corner case
bitbake: runqueue: Ensure we clear the stamp cache
poky: Retire opensuse 42.3 from SANITY_TESTED_DISTROS
gcc-cross-canadian: Drop obsolete shlibs exclusion
bitbake: tests/runqueue: Fix tests
bitbake: runqueue: Fix data corruption problem
bitbake: runqueue: Ensure data is handled correctly
bitbake: hashserv: Ensure we don't accumulate sockets in TIME_WAIT state
bitbake: runqueue: Ensure target_tids is filtered
Robert Yang (3):
bitbake: cooker: Cleanup the queue before call process.join()
bitbake: knotty: Fix for the Second Keyboard Interrupt
bitbake: bitbake: server/process: Handle BBHandledException to avoid unexpected exceptions
Ross Burton (23):
libidn2: remove build paths from libidn2.pc
gnutls: don't use HOSTTOOLS_DIR/bash as a shell on target
libical: upgrade to 3.0.5
perl: fix whitespace
perl: add PACKAGECONFIG for db
fortran-helloworld: neaten recipe
python3: remove empty python3-distutils-staticdev
python3: support recommends in manifest
python3: split out the Windows distutils installer stubs
insane: check if the recipe incorrectly uses DEPENDS_${PN}
libxx86misc: remove this now redundant library
xserver-xorg: clean up xorgproto dependencies
xserver-xorg: add PACKAGECONFIG for DGA
xdpyinfo: don't depend on DGA
libxx86dga: remove obsolete client library
xserver-xorg: remove embedded build path in the source
libx11: update to 1.6.8
sanity: update for new bb.build.exec_func() behaviour
libx11-diet: remove
qemu: fix patch Upstream-Status
xserver-xorg: refresh build path removal patch
waffle: upgrade 1.5.2 -> 1.6.0
libx11: replace libtool patch with upstreamed patch
Tim Blechmann (1):
deb: allow custom dpkg command
Trevor Gamblin (2):
gzip: update ptest package dependencies
patch: fix CVE-2019-13638
Wenlin Kang (1):
db: add switch for building database verification
Will Page (1):
uboot: fixes to uboot-extlinux-config attribute values
William Bourque (1):
meta/lib/oeqa: Remove ext4 for bootimg-biosplusefi
Yi Zhao (1):
libx11-compose-data: upgrade 1.6.7 -> 1.6.8
Yuan Chao (4):
glib-2.0: upgrade 2.60.5 -> 2.60.6
nettle: upgrade 3.4.1 -> 3.5.1
python3-pbr: upgrade 5.4.1 -> 5.4.2
gpgme: upgrade 1.13.0 -> 1.13.1
Zang Ruochen (8):
msmtp: upgrade 1.8.4 -> 1.8.5
curl: upgrade 7.65.2 -> 7.65.3
iso-codes: upgrade 4.2 -> 4.3
python-scons: upgrade 3.0.5 -> 3.1.0
libgudev: upgrade 232 -> 233
libglu: upgrade 9.0.0 -> 9.0.1
man-db: upgrade 2.8.5 -> 2.8.6.1
libnewt: upgrade 0.52.20 -> 0.52.21
Zheng Ruoqin (1):
python3-mako: 1.0.14 -> 1.1.0
Zoltan Kuscsik (1):
kmscube: update to latest revision
Change-Id: I2cd1a0d59da46725b1aba5a79b63eb6121b3c79e
Signed-off-by: Brad Bishop <bradleyb@fuzziesquirrel.com>
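Two themes dominate the bitbake side of this diff. First, task references are unified: the old "<fn>.<task>" keys become "<fn>:<task>" task identifiers ("tids") throughout the cache, data, runqueue and siggen code. Second, hash equivalence support lands: the cooker can autostart a local hash server and the runqueue learns to migrate task hashes while a build is running. A short sketch of the new tid convention (the recipe path is illustrative):

    # Old-style key: fn + "." + task    New-style tid: fn + ":" + task
    fn = "/meta/recipes-core/busybox/busybox_1.31.0.bb"   # illustrative path
    tid = fn + ":" + "do_compile"

    import bb.runqueue
    # split_tid_mcfn() recovers the parts, as the reworked siggen code below does:
    (mc, _, taskname, taskfn) = bb.runqueue.split_tid_mcfn(tid)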
diff --git a/poky/bitbake/lib/bb/__init__.py b/poky/bitbake/lib/bb/__init__.py
index 7579a89..322a1e0 100644
--- a/poky/bitbake/lib/bb/__init__.py
+++ b/poky/bitbake/lib/bb/__init__.py
@@ -9,7 +9,7 @@
# SPDX-License-Identifier: GPL-2.0-only
#
-__version__ = "1.43.0"
+__version__ = "1.43.1"
import sys
if sys.version_info < (3, 4, 0):
diff --git a/poky/bitbake/lib/bb/build.py b/poky/bitbake/lib/bb/build.py
index e2f91fa..30a2ba2 100644
--- a/poky/bitbake/lib/bb/build.py
+++ b/poky/bitbake/lib/bb/build.py
@@ -54,23 +54,6 @@
builtins['bb'] = bb
builtins['os'] = os
-class FuncFailed(Exception):
- def __init__(self, name = None, logfile = None):
- self.logfile = logfile
- self.name = name
- if name:
- self.msg = 'Function failed: %s' % name
- else:
- self.msg = "Function failed"
-
- def __str__(self):
- if self.logfile and os.path.exists(self.logfile):
- msg = ("%s (log file is located at %s)" %
- (self.msg, self.logfile))
- else:
- msg = self.msg
- return msg
-
class TaskBase(event.Event):
"""Base class for task events"""
@@ -189,12 +172,7 @@
return sys.stdout.name
-#
-# pythonexception allows the python exceptions generated to be raised
-# as the real exceptions (not FuncFailed) and without a backtrace at the
-# origin of the failure.
-#
-def exec_func(func, d, dirs = None, pythonexception=False):
+def exec_func(func, d, dirs = None):
"""Execute a BB 'function'"""
try:
@@ -266,7 +244,7 @@
with bb.utils.fileslocked(lockfiles):
if ispython:
- exec_func_python(func, d, runfile, cwd=adir, pythonexception=pythonexception)
+ exec_func_python(func, d, runfile, cwd=adir)
else:
exec_func_shell(func, d, runfile, cwd=adir)
@@ -286,7 +264,7 @@
{function}(d)
"""
logformatter = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
-def exec_func_python(func, d, runfile, cwd=None, pythonexception=False):
+def exec_func_python(func, d, runfile, cwd=None):
"""Execute a python BB 'function'"""
code = _functionfmt.format(function=func)
@@ -311,14 +289,7 @@
bb.methodpool.insert_method(func, text, fn, lineno - 1)
comp = utils.better_compile(code, func, "exec_python_func() autogenerated")
- utils.better_exec(comp, {"d": d}, code, "exec_python_func() autogenerated", pythonexception=pythonexception)
- except (bb.parse.SkipRecipe, bb.build.FuncFailed):
- raise
- except Exception as e:
- if pythonexception:
- raise
- logger.error(str(e))
- raise FuncFailed(func, None)
+ utils.better_exec(comp, {"d": d}, code, "exec_python_func() autogenerated")
finally:
bb.debug(2, "Python function %s finished" % func)
@@ -475,13 +446,8 @@
with open(fifopath, 'r+b', buffering=0) as fifo:
try:
bb.debug(2, "Executing shell function %s" % func)
-
- try:
- with open(os.devnull, 'r+') as stdin, logfile:
- bb.process.run(cmd, shell=False, stdin=stdin, log=logfile, extrafiles=[(fifo,readfifo)])
- except bb.process.CmdError:
- logfn = d.getVar('BB_LOGFILE')
- raise FuncFailed(func, logfn)
+ with open(os.devnull, 'r+') as stdin, logfile:
+ bb.process.run(cmd, shell=False, stdin=stdin, log=logfile, extrafiles=[(fifo,readfifo)])
finally:
os.unlink(fifopath)
@@ -609,9 +575,6 @@
event.fire(TaskStarted(task, logfn, flags, localdata), localdata)
except (bb.BBHandledException, SystemExit):
return 1
- except FuncFailed as exc:
- logger.error(str(exc))
- return 1
try:
for func in (prefuncs or '').split():
@@ -619,7 +582,10 @@
exec_func(task, localdata)
for func in (postfuncs or '').split():
exec_func(func, localdata)
- except FuncFailed as exc:
+ except bb.BBHandledException:
+ event.fire(TaskFailed(task, logfn, localdata, True), localdata)
+ return 1
+ except Exception as exc:
if quieterr:
event.fire(TaskFailedSilent(task, logfn, localdata), localdata)
else:
@@ -627,9 +593,6 @@
logger.error(str(exc))
event.fire(TaskFailed(task, logfn, localdata, errprinted), localdata)
return 1
- except bb.BBHandledException:
- event.fire(TaskFailed(task, logfn, localdata, True), localdata)
- return 1
finally:
sys.stdout.flush()
sys.stderr.flush()
diff --git a/poky/bitbake/lib/bb/cache.py b/poky/bitbake/lib/bb/cache.py
index ab18dd5..b6f7da5 100644
--- a/poky/bitbake/lib/bb/cache.py
+++ b/poky/bitbake/lib/bb/cache.py
@@ -220,7 +220,7 @@
cachedata.hashfn[fn] = self.hashfilename
for task, taskhash in self.basetaskhashes.items():
- identifier = '%s.%s' % (fn, task)
+ identifier = '%s:%s' % (fn, task)
cachedata.basetaskhash[identifier] = taskhash
cachedata.inherits[fn] = self.inherits
@@ -883,3 +883,56 @@
p.dump([data, self.__class__.CACHE_VERSION])
bb.utils.unlockfile(glf)
+
+
+class SimpleCache(object):
+ """
+ BitBake multi-process cache implementation
+
+ Used by the codeparser & file checksum caches
+ """
+
+ def __init__(self, version):
+ self.cachefile = None
+ self.cachedata = None
+ self.cacheversion = version
+
+ def init_cache(self, d, cache_file_name=None, defaultdata=None):
+ cachedir = (d.getVar("PERSISTENT_DIR") or
+ d.getVar("CACHE"))
+ if not cachedir:
+ return defaultdata
+
+ bb.utils.mkdirhier(cachedir)
+ self.cachefile = os.path.join(cachedir,
+ cache_file_name or self.__class__.cache_file_name)
+ logger.debug(1, "Using cache in '%s'", self.cachefile)
+
+ glf = bb.utils.lockfile(self.cachefile + ".lock")
+
+ try:
+ with open(self.cachefile, "rb") as f:
+ p = pickle.Unpickler(f)
+ data, version = p.load()
+ except:
+ bb.utils.unlockfile(glf)
+ return defaultdata
+
+ bb.utils.unlockfile(glf)
+
+ if version != self.cacheversion:
+ return defaultdata
+
+ return data
+
+ def save(self, data):
+ if not self.cachefile:
+ return
+
+ glf = bb.utils.lockfile(self.cachefile + ".lock")
+
+ with open(self.cachefile, "wb") as f:
+ p = pickle.Pickler(f, -1)
+ p.dump([data, self.cacheversion])
+
+ bb.utils.unlockfile(glf)
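The new SimpleCache above is a small versioned pickle store guarded by a lock file; the siggen changes later in this diff use it to persist unihashes across builds. A minimal usage sketch, assuming bitbake's lib directory is on sys.path (the cache key and paths are illustrative):

    import bb.data, bb.cache

    d = bb.data.init()                    # minimal datastore for the sketch
    d.setVar("CACHE", "/tmp/bbcache")     # directory the pickle will live in

    cache = bb.cache.SimpleCache("1")     # "1" is the cache version
    # Returns the stored data, or defaultdata for a missing/stale/corrupt cache.
    unihashes = cache.init_cache(d, "bb_unihashes.dat", defaultdata={})
    unihashes["/path/to/recipe.bb:do_compile"] = "0123abcd"   # illustrative
    cache.save(unihashes)                 # re-pickles the data under the lock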
diff --git a/poky/bitbake/lib/bb/cooker.py b/poky/bitbake/lib/bb/cooker.py
index b4851e1..0607fcc 100644
--- a/poky/bitbake/lib/bb/cooker.py
+++ b/poky/bitbake/lib/bb/cooker.py
@@ -31,6 +31,7 @@
import json
import pickle
import codecs
+import hashserv
logger = logging.getLogger("BitBake")
collectlog = logging.getLogger("BitBake.Collection")
@@ -192,6 +193,8 @@
bb.parse.BBHandler.cached_statements = {}
self.ui_cmdline = None
+ self.hashserv = None
+ self.hashservport = None
self.initConfigurationData()
@@ -372,8 +375,6 @@
# Copy of the data store which has been expanded.
# Used for firing events and accessing variables where expansion needs to be accounted for
#
- bb.parse.init_parser(self.data)
-
if CookerFeatures.BASEDATASTORE_TRACKING in self.featureset:
self.disableDataTracking()
@@ -391,6 +392,22 @@
except prserv.serv.PRServiceConfigError as e:
bb.fatal("Unable to start PR Server, exitting")
+ if self.data.getVar("BB_HASHSERVE") == "localhost:0":
+ if not self.hashserv:
+ dbfile = (self.data.getVar("PERSISTENT_DIR") or self.data.getVar("CACHE")) + "/hashserv.db"
+ self.hashserv = hashserv.create_server(('localhost', 0), dbfile, '')
+ self.hashservport = "localhost:" + str(self.hashserv.server_port)
+ self.hashserv.process = multiprocessing.Process(target=self.hashserv.serve_forever)
+ self.hashserv.process.daemon = True
+ self.hashserv.process.start()
+ self.data.setVar("BB_HASHSERVE", self.hashservport)
+ self.databuilder.origdata.setVar("BB_HASHSERVE", self.hashservport)
+ self.databuilder.data.setVar("BB_HASHSERVE", self.hashservport)
+ for mc in self.databuilder.mcdata:
+ self.databuilder.mcdata[mc].setVar("BB_HASHSERVE", self.hashservport)
+
+ bb.parse.init_parser(self.data)
+
def enableDataTracking(self):
self.configuration.tracking = True
if hasattr(self, "data"):
@@ -1645,9 +1662,11 @@
def post_serve(self):
prserv.serv.auto_shutdown()
+ if self.hashserv:
+ self.hashserv.process.terminate()
+ self.hashserv.process.join()
bb.event.fire(CookerExit(), self.data)
-
def shutdown(self, force = False):
if force:
self.state = state.forceshutdown
@@ -1662,6 +1681,7 @@
def reset(self):
self.initConfigurationData()
+ self.handlePRServ()
def clientComplete(self):
"""Called when the client is done using the server"""
@@ -2062,6 +2082,14 @@
for process in self.processes:
self.parser_quit.put(None)
+ # Cleanup the queue before call process.join(), otherwise there might be
+ # deadlocks.
+ while True:
+ try:
+ self.result_queue.get(timeout=0.25)
+ except queue.Empty:
+ break
+
for process in self.processes:
if force:
process.join(.1)
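The cooker hunk above autostarts a hash equivalence server when BB_HASHSERVE is set to "localhost:0" (for example in local.conf) and rewrites the variable to the address actually bound. A standalone sketch of the same hashserv calls, assuming bitbake's lib directory is on sys.path and with an illustrative database path:

    import multiprocessing
    import hashserv

    # Bind an ephemeral localhost port, backed by a sqlite database.
    server = hashserv.create_server(('localhost', 0), '/tmp/hashserv.db', '')
    addr = "localhost:" + str(server.server_port)

    # Serve from a daemon process, mirroring what the cooker does above.
    server.process = multiprocessing.Process(target=server.serve_forever)
    server.process.daemon = True
    server.process.start()
    # ... and on shutdown: server.process.terminate(); server.process.join()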
diff --git a/poky/bitbake/lib/bb/data.py b/poky/bitbake/lib/bb/data.py
index 92ef405..0d75d0c 100644
--- a/poky/bitbake/lib/bb/data.py
+++ b/poky/bitbake/lib/bb/data.py
@@ -130,7 +130,7 @@
if all:
oval = d.getVar(var, False)
val = d.getVar(var)
- except (KeyboardInterrupt, bb.build.FuncFailed):
+ except (KeyboardInterrupt):
raise
except Exception as exc:
o.write('# expansion of %s threw %s: %s\n' % (var, exc.__class__.__name__, str(exc)))
@@ -422,7 +422,7 @@
var = lookupcache[dep]
if var is not None:
data = data + str(var)
- k = fn + "." + task
+ k = fn + ":" + task
basehash[k] = hashlib.sha256(data.encode("utf-8")).hexdigest()
taskdeps[task] = alldeps
diff --git a/poky/bitbake/lib/bb/fetch2/npm.py b/poky/bitbake/lib/bb/fetch2/npm.py
index 4427b1b..9700e61 100644
--- a/poky/bitbake/lib/bb/fetch2/npm.py
+++ b/poky/bitbake/lib/bb/fetch2/npm.py
@@ -101,11 +101,19 @@
return False
return True
- def _runwget(self, ud, d, command, quiet):
- logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
- bb.fetch2.check_network_access(d, command, ud.url)
+ def _runpack(self, ud, d, pkgfullname: str, quiet=False) -> str:
+ """
+ Runs npm pack on a full package name.
+ Returns the filename of the downloaded package
+ """
+ bb.fetch2.check_network_access(d, pkgfullname, ud.registry)
dldir = d.getVar("DL_DIR")
- runfetchcmd(command, d, quiet, workdir=dldir)
+ dldir = os.path.join(dldir, ud.prefixdir)
+
+ command = "npm pack {} --registry {}".format(pkgfullname, ud.registry)
+ logger.debug(2, "Fetching {} using command '{}' in {}".format(pkgfullname, command, dldir))
+ filename = runfetchcmd(command, d, quiet, workdir=dldir)
+ return filename.rstrip()
def _unpackdep(self, ud, pkg, data, destdir, dldir, d):
file = data[pkg]['tgz']
@@ -163,6 +171,9 @@
pkgfullname = pkg
if version != '*' and not '/' in version:
pkgfullname += "@'%s'" % version
+ if pkgfullname in fetchedlist:
+ return
+
logger.debug(2, "Calling getdeps on %s" % pkg)
fetchcmd = "npm view %s --json --registry %s" % (pkgfullname, ud.registry)
output = runfetchcmd(fetchcmd, d, True)
@@ -182,15 +193,10 @@
if (not blacklist and 'linux' not in pkg_os) or '!linux' in pkg_os:
logger.debug(2, "Skipping %s since it's incompatible with Linux" % pkg)
return
- #logger.debug(2, "Output URL is %s - %s - %s" % (ud.basepath, ud.basename, ud.localfile))
- outputurl = pdata['dist']['tarball']
+ filename = self._runpack(ud, d, pkgfullname)
data[pkg] = {}
- data[pkg]['tgz'] = os.path.basename(outputurl)
- if outputurl in fetchedlist:
- return
-
- self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
- fetchedlist.append(outputurl)
+ data[pkg]['tgz'] = filename
+ fetchedlist.append(pkgfullname)
dependencies = pdata.get('dependencies', {})
optionalDependencies = pdata.get('optionalDependencies', {})
@@ -217,17 +223,12 @@
if obj == pkg:
self._getshrinkeddependencies(obj, data['dependencies'][obj], data['dependencies'][obj]['version'], d, ud, lockdown, manifest, False)
return
- outputurl = "invalid"
- if ('resolved' not in data) or (not data['resolved'].startswith('http://') and not data['resolved'].startswith('https://')):
- # will be the case for ${PN}
- fetchcmd = "npm view %s@%s dist.tarball --registry %s" % (pkg, version, ud.registry)
- logger.debug(2, "Found this matching URL: %s" % str(fetchcmd))
- outputurl = runfetchcmd(fetchcmd, d, True)
- else:
- outputurl = data['resolved']
- self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
+
+ pkgnameWithVersion = "{}@{}".format(pkg, version)
+ logger.debug(2, "Get dependencies for {}".format(pkgnameWithVersion))
+ filename = self._runpack(ud, d, pkgnameWithVersion)
manifest[pkg] = {}
- manifest[pkg]['tgz'] = os.path.basename(outputurl).rstrip()
+ manifest[pkg]['tgz'] = filename
manifest[pkg]['deps'] = {}
if pkg in lockdown:
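The npm fetcher now shells out to "npm pack", which downloads the tarball into the working directory and prints its filename, instead of wget-ing dist.tarball URLs. A standalone approximation of what _runpack() delegates to runfetchcmd (package name, registry and directory are examples, not from the patch):

    import subprocess

    def npm_pack(pkgfullname, registry, workdir):
        # 'npm pack' fetches the tarball into workdir and prints its filename.
        cmd = "npm pack {} --registry {}".format(pkgfullname, registry)
        out = subprocess.check_output(cmd, shell=True, cwd=workdir)
        return out.decode().rstrip()

    tgz = npm_pack("left-pad@1.3.0", "https://registry.npmjs.org", "/tmp/dl")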
diff --git a/poky/bitbake/lib/bb/fetch2/wget.py b/poky/bitbake/lib/bb/fetch2/wget.py
index 0f71ee4..725586d 100644
--- a/poky/bitbake/lib/bb/fetch2/wget.py
+++ b/poky/bitbake/lib/bb/fetch2/wget.py
@@ -257,13 +257,15 @@
fp.read()
fp.close()
- newheaders = dict((k, v) for k, v in list(req.headers.items())
- if k.lower() not in ("content-length", "content-type"))
- return self.parent.open(urllib.request.Request(req.get_full_url(),
- headers=newheaders,
- origin_req_host=req.origin_req_host,
- unverifiable=True))
+ if req.get_method() != 'GET':
+ newheaders = dict((k, v) for k, v in list(req.headers.items())
+ if k.lower() not in ("content-length", "content-type"))
+ return self.parent.open(urllib.request.Request(req.get_full_url(),
+ headers=newheaders,
+ origin_req_host=req.origin_req_host,
+ unverifiable=True))
+ raise urllib.request.HTTPError(req, code, msg, headers, None)
# Some servers (e.g. GitHub archives, hosted on Amazon S3) return 403
# Forbidden when they actually mean 405 Method Not Allowed.
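With the hunk above, a 403 answered to a plain GET now propagates as an HTTPError instead of re-entering the handler (the cause of the 'maximum recursion depth' RuntimeErrors); only non-GET requests are retried with the stripped headers. From a caller's point of view (URL illustrative):

    import urllib.request, urllib.error

    try:
        urllib.request.urlopen("https://example.com/forbidden")
    except urllib.error.HTTPError as e:
        print("fetch failed with HTTP", e.code)   # e.g. 403, no recursion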
diff --git a/poky/bitbake/lib/bb/runqueue.py b/poky/bitbake/lib/bb/runqueue.py
index 6a2de24..7fa074f 100644
--- a/poky/bitbake/lib/bb/runqueue.py
+++ b/poky/bitbake/lib/bb/runqueue.py
@@ -133,7 +133,7 @@
self.prio_map = [self.rqdata.runtaskentries.keys()]
- self.buildable = []
+ self.buildable = set()
self.skip_maxthread = {}
self.stamps = {}
for tid in self.rqdata.runtaskentries:
@@ -148,8 +148,10 @@
"""
Return the id of the first task we find that is buildable
"""
- self.buildable = [x for x in self.buildable if x not in self.rq.runq_running]
- buildable = [x for x in self.buildable if (x in self.rq.tasks_covered or x in self.rq.tasks_notcovered)]
+ buildable = set(self.buildable)
+ buildable.difference_update(self.rq.runq_running)
+ buildable.difference_update(self.rq.holdoff_tasks)
+ buildable.intersection_update(self.rq.tasks_covered | self.rq.tasks_notcovered)
if not buildable:
return None
@@ -167,7 +169,7 @@
skip_buildable[rtaskname] = 1
if len(buildable) == 1:
- tid = buildable[0]
+ tid = buildable.pop()
taskname = taskname_from_tid(tid)
if taskname in skip_buildable and skip_buildable[taskname] >= int(self.skip_maxthread[taskname]):
return None
@@ -204,7 +206,10 @@
return self.next_buildable_task()
def newbuildable(self, task):
- self.buildable.append(task)
+ self.buildable.add(task)
+
+ def removebuildable(self, task):
+ self.buildable.remove(task)
def describe_task(self, taskid):
result = 'ID %s' % taskid
@@ -1171,10 +1176,9 @@
def prepare_task_hash(self, tid):
procdep = []
for dep in self.runtaskentries[tid].depends:
- procdep.append(fn_from_tid(dep) + "." + taskname_from_tid(dep))
- (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
- self.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(taskfn, taskname, procdep, self.dataCaches[mc])
- self.runtaskentries[tid].unihash = bb.parse.siggen.get_unihash(taskfn + "." + taskname)
+ procdep.append(dep)
+ self.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, procdep, self.dataCaches[mc_from_tid(tid)])
+ self.runtaskentries[tid].unihash = bb.parse.siggen.get_unihash(tid)
def dump_data(self):
"""
@@ -1251,6 +1255,7 @@
"buildname" : self.cfgData.getVar("BUILDNAME"),
"date" : self.cfgData.getVar("DATE"),
"time" : self.cfgData.getVar("TIME"),
+ "hashservport" : self.cooker.hashservport,
}
worker.stdin.write(b"<cookerconfig>" + pickle.dumps(self.cooker.configuration) + b"</cookerconfig>")
@@ -1384,57 +1389,29 @@
cache[tid] = iscurrent
return iscurrent
- def validate_hashes(self, tocheck, data, presentcount=None, siginfo=False):
+ def validate_hashes(self, tocheck, data, currentcount=None, siginfo=False):
valid = set()
if self.hashvalidate:
- sq_hash = []
- sq_hashfn = []
- sq_unihash = []
- sq_fn = []
- sq_taskname = []
- sq_task = []
+ sq_data = {}
+ sq_data['hash'] = {}
+ sq_data['hashfn'] = {}
+ sq_data['unihash'] = {}
for tid in tocheck:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
+ sq_data['hash'][tid] = self.rqdata.runtaskentries[tid].hash
+ sq_data['hashfn'][tid] = self.rqdata.dataCaches[mc].hashfn[taskfn]
+ sq_data['unihash'][tid] = self.rqdata.runtaskentries[tid].unihash
- sq_fn.append(fn)
- sq_hashfn.append(self.rqdata.dataCaches[mc].hashfn[taskfn])
- sq_hash.append(self.rqdata.runtaskentries[tid].hash)
- sq_unihash.append(self.rqdata.runtaskentries[tid].unihash)
- sq_taskname.append(taskname)
- sq_task.append(tid)
-
- if presentcount is not None:
- data.setVar("BB_SETSCENE_STAMPCURRENT_COUNT", presentcount)
-
- valid_ids = self.validate_hash(sq_fn, sq_taskname, sq_hash, sq_hashfn, siginfo, sq_unihash, data, presentcount)
-
- if presentcount is not None:
- data.delVar("BB_SETSCENE_STAMPCURRENT_COUNT")
-
- for v in valid_ids:
- valid.add(sq_task[v])
+ valid = self.validate_hash(sq_data, data, siginfo, currentcount)
return valid
- def validate_hash(self, sq_fn, sq_task, sq_hash, sq_hashfn, siginfo, sq_unihash, d, presentcount):
- locs = {"sq_fn" : sq_fn, "sq_task" : sq_task, "sq_hash" : sq_hash, "sq_hashfn" : sq_hashfn,
- "sq_unihash" : sq_unihash, "siginfo" : siginfo, "d" : d}
+ def validate_hash(self, sq_data, d, siginfo, currentcount):
+ locs = {"sq_data" : sq_data, "d" : d, "siginfo" : siginfo, "currentcount" : currentcount}
- # Backwards compatibility
- hashvalidate_args = ("(sq_fn, sq_task, sq_hash, sq_hashfn, d, siginfo=siginfo, sq_unihash=sq_unihash)",
- "(sq_fn, sq_task, sq_hash, sq_hashfn, d, siginfo=siginfo)",
- "(sq_fn, sq_task, sq_hash, sq_hashfn, d)")
+ # Metadata has **kwargs so args can be added, sq_data can also gain new fields
+ call = self.hashvalidate + "(sq_data, d, siginfo=siginfo, currentcount=currentcount)"
- for args in hashvalidate_args[:-1]:
- try:
- call = self.hashvalidate + args
- return bb.utils.better_eval(call, locs)
- except TypeError:
- continue
-
- # Call the last entry without a try...catch to propagate any thrown
- # TypeError
- call = self.hashvalidate + hashvalidate_args[-1]
return bb.utils.better_eval(call, locs)
def _execute_runqueue(self):
@@ -1516,6 +1493,7 @@
self.dm_event_handler_registered = False
if build_done and self.rqexe:
+ bb.parse.siggen.save_unitaskhashes()
self.teardown_workers()
if self.rqexe:
if self.rqexe.stats.failed:
@@ -1718,6 +1696,9 @@
self.sq_running = set()
self.sq_live = set()
+ self.updated_taskhash_queue = []
+ self.pending_migrations = set()
+
self.runq_buildable = set()
self.runq_running = set()
self.runq_complete = set()
@@ -1729,6 +1710,7 @@
self.stampcache = {}
+ self.holdoff_tasks = set()
self.sqdone = False
self.stats = RunQueueStats(len(self.rqdata.runtaskentries))
@@ -1751,7 +1733,10 @@
self.tasks_notcovered = set()
self.scenequeue_notneeded = set()
- self.coveredtopocess = set()
+ # We can't skip specified target tasks which aren't setscene tasks
+ self.cantskip = set(self.rqdata.target_tids)
+ self.cantskip.difference_update(self.rqdata.runq_setscene_tids)
+ self.cantskip.intersection_update(self.rqdata.runtaskentries)
schedulers = self.get_schedulers()
for scheduler in schedulers:
@@ -1763,9 +1748,9 @@
bb.fatal("Invalid scheduler '%s'. Available schedulers: %s" %
(self.scheduler, ", ".join(obj.name for obj in schedulers)))
- if len(self.rqdata.runq_setscene_tids) > 0:
- self.sqdata = SQData()
- build_scenequeue_data(self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self)
+ #if len(self.rqdata.runq_setscene_tids) > 0:
+ self.sqdata = SQData()
+ build_scenequeue_data(self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self)
def runqueue_process_waitpid(self, task, status):
@@ -1831,6 +1816,9 @@
if not self.rq.depvalidate:
return False
+ # Must not edit parent data
+ taskdeps = set(taskdeps)
+
taskdata = {}
taskdeps.add(task)
for dep in taskdeps:
@@ -1918,17 +1906,58 @@
self.stats.taskSkipped()
self.stats.taskCompleted()
+ def summarise_scenequeue_errors(self):
+ err = False
+ if not self.sqdone:
+ logger.debug(1, 'We could skip tasks %s', "\n".join(sorted(self.scenequeue_covered)))
+ completeevent = sceneQueueComplete(self.sq_stats, self.rq)
+ bb.event.fire(completeevent, self.cfgData)
+ if self.sq_deferred:
+ logger.error("Scenequeue had deferred entries: %s" % pprint.pformat(self.sq_deferred))
+ err = True
+ if self.updated_taskhash_queue:
+ logger.error("Scenequeue had unprocessed changed taskhash entries: %s" % pprint.pformat(self.updated_taskhash_queue))
+ err = True
+ if self.holdoff_tasks:
+ logger.error("Scenequeue had holdoff tasks: %s" % pprint.pformat(self.holdoff_tasks))
+ err = True
+
+ for tid in self.rqdata.runq_setscene_tids:
+ if tid not in self.scenequeue_covered and tid not in self.scenequeue_notcovered:
+ err = True
+ logger.error("Setscene Task %s was never marked as covered or not covered" % tid)
+ if tid not in self.sq_buildable:
+ err = True
+ logger.error("Setscene Task %s was never marked as buildable" % tid)
+ if tid not in self.sq_running:
+ err = True
+ logger.error("Setscene Task %s was never marked as running" % tid)
+
+ for x in self.rqdata.runtaskentries:
+ if x not in self.tasks_covered and x not in self.tasks_notcovered:
+ logger.error("Task %s was never moved from the setscene queue" % x)
+ err = True
+ if x not in self.tasks_scenequeue_done:
+ logger.error("Task %s was never processed by the setscene code" % x)
+ err = True
+ if len(self.rqdata.runtaskentries[x].depends) == 0 and x not in self.runq_buildable:
+ logger.error("Task %s was never marked as buildable by the setscene code" % x)
+ err = True
+ return err
+
+
def execute(self):
"""
Run the tasks in a queue prepared by prepare_runqueue
"""
self.rq.read_workers()
+ self.process_possible_migrations()
task = None
if not self.sqdone and self.can_start_task():
# Find the next setscene to run
- for nexttask in self.rqdata.runq_setscene_tids:
+ for nexttask in sorted(self.rqdata.runq_setscene_tids):
if nexttask in self.sq_buildable and nexttask not in self.sq_running and self.sqdata.stamps[nexttask] not in self.build_stamps.values():
if nexttask not in self.sqdata.unskippable and len(self.sqdata.sq_revdeps[nexttask]) > 0 and self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and self.check_dependencies(nexttask, self.sqdata.sq_revdeps[nexttask]):
if nexttask not in self.rqdata.target_tids:
@@ -1938,6 +1967,10 @@
if nexttask in self.sq_deferred:
del self.sq_deferred[nexttask]
return True
+ # If covered tasks are running, need to wait for them to complete
+ for t in self.sqdata.sq_covered_tasks[nexttask]:
+ if t in self.runq_running and t not in self.runq_complete:
+ continue
if nexttask in self.sq_deferred:
if self.sq_deferred[nexttask] not in self.runq_complete:
continue
@@ -2006,24 +2039,10 @@
if self.can_start_task():
return True
- if not self.sq_live and not self.sqdone and not self.sq_deferred:
+ if not self.sq_live and not self.sqdone and not self.sq_deferred and not self.updated_taskhash_queue and not self.holdoff_tasks:
logger.info("Setscene tasks completed")
- logger.debug(1, 'We could skip tasks %s', "\n".join(sorted(self.scenequeue_covered)))
- completeevent = sceneQueueComplete(self.sq_stats, self.rq)
- bb.event.fire(completeevent, self.cfgData)
-
- err = False
- for x in self.rqdata.runtaskentries:
- if x not in self.tasks_covered and x not in self.tasks_notcovered:
- logger.error("Task %s was never moved from the setscene queue" % x)
- err = True
- if x not in self.tasks_scenequeue_done:
- logger.error("Task %s was never processed by the setscene code" % x)
- err = True
- if len(self.rqdata.runtaskentries[x].depends) == 0 and x not in self.runq_buildable:
- logger.error("Task %s was never marked as buildable by the setscene code" % x)
- err = True
+ err = self.summarise_scenequeue_errors()
if err:
self.rq.state = runQueueFailed
return True
@@ -2119,14 +2138,22 @@
return True
# Sanity Checks
+ err = self.summarise_scenequeue_errors()
for task in self.rqdata.runtaskentries:
if task not in self.runq_buildable:
logger.error("Task %s never buildable!", task)
+ err = True
elif task not in self.runq_running:
logger.error("Task %s never ran!", task)
+ err = True
elif task not in self.runq_complete:
logger.error("Task %s never completed!", task)
- self.rq.state = runQueueComplete
+ err = True
+
+ if err:
+ self.rq.state = runQueueFailed
+ else:
+ self.rq.state = runQueueComplete
return True
@@ -2144,7 +2171,7 @@
# as most code can't handle them
def build_taskdepdata(self, task):
taskdepdata = {}
- next = self.rqdata.runtaskentries[task].depends
+ next = self.rqdata.runtaskentries[task].depends.copy()
next.add(task)
next = self.filtermcdeps(task, next)
while next:
@@ -2166,57 +2193,153 @@
#bb.note("Task %s: " % task + str(taskdepdata).replace("], ", "],\n"))
return taskdepdata
- def scenequeue_process_notcovered(self, task):
- if len(self.rqdata.runtaskentries[task].depends) == 0:
- self.setbuildable(task)
- notcovered = set([task])
- while notcovered:
- new = set()
- for t in notcovered:
- for deptask in self.rqdata.runtaskentries[t].depends:
- if deptask in notcovered or deptask in new or deptask in self.rqdata.runq_setscene_tids or deptask in self.tasks_notcovered:
- continue
- logger.debug(1, 'Task %s depends on non-setscene task %s so not skipping' % (t, deptask))
- new.add(deptask)
- self.tasks_notcovered.add(deptask)
- if len(self.rqdata.runtaskentries[deptask].depends) == 0:
- self.setbuildable(deptask)
- notcovered = new
+ def update_holdofftasks(self):
+ self.holdoff_tasks = set()
- def scenequeue_process_unskippable(self, task):
- # Look up the dependency chain for non-setscene things which depend on this task
- # and mark as 'done'/notcovered
- ready = set([task])
- while ready:
- new = set()
- for t in ready:
- for deptask in self.rqdata.runtaskentries[t].revdeps:
- if deptask in ready or deptask in new or deptask in self.tasks_scenequeue_done or deptask in self.rqdata.runq_setscene_tids:
- continue
- if self.rqdata.runtaskentries[deptask].depends.issubset(self.tasks_scenequeue_done):
- new.add(deptask)
- self.tasks_scenequeue_done.add(deptask)
- self.tasks_notcovered.add(deptask)
- #logger.warning("Up: " + str(deptask))
- ready = new
+ for tid in self.rqdata.runq_setscene_tids:
+ if tid not in self.scenequeue_covered and tid not in self.scenequeue_notcovered:
+ self.holdoff_tasks.add(tid)
+
+ for tid in self.holdoff_tasks.copy():
+ for dep in self.sqdata.sq_covered_tasks[tid]:
+ if dep not in self.runq_complete:
+ self.holdoff_tasks.add(dep)
+
+ def process_possible_migrations(self):
+
+ changed = set()
+ for tid, unihash in self.updated_taskhash_queue.copy():
+ if tid in self.runq_running and tid not in self.runq_complete:
+ continue
+
+ self.updated_taskhash_queue.remove((tid, unihash))
+
+ if unihash != self.rqdata.runtaskentries[tid].unihash:
+ logger.info("Task %s unihash changed to %s" % (tid, unihash))
+ self.rqdata.runtaskentries[tid].unihash = unihash
+ bb.parse.siggen.set_unihash(tid, unihash)
+
+ # Work out all tasks which depend on this one
+ total = set()
+ next = set(self.rqdata.runtaskentries[tid].revdeps)
+ while next:
+ current = next.copy()
+ total = total |next
+ next = set()
+ for ntid in current:
+ next |= self.rqdata.runtaskentries[ntid].revdeps
+ next.difference_update(total)
+
+ # Now iterate those tasks in dependency order to regenerate their taskhash/unihash
+ done = set()
+ next = set(self.rqdata.runtaskentries[tid].revdeps)
+ while next:
+ current = next.copy()
+ next = set()
+ for tid in current:
+ if not self.rqdata.runtaskentries[tid].depends.isdisjoint(total):
+ continue
+ procdep = []
+ for dep in self.rqdata.runtaskentries[tid].depends:
+ procdep.append(dep)
+ orighash = self.rqdata.runtaskentries[tid].hash
+ self.rqdata.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, procdep, self.rqdata.dataCaches[mc_from_tid(tid)])
+ origuni = self.rqdata.runtaskentries[tid].unihash
+ self.rqdata.runtaskentries[tid].unihash = bb.parse.siggen.get_unihash(tid)
+ logger.debug(1, "Task %s hash changes: %s->%s %s->%s" % (tid, orighash, self.rqdata.runtaskentries[tid].hash, origuni, self.rqdata.runtaskentries[tid].unihash))
+ next |= self.rqdata.runtaskentries[tid].revdeps
+ changed.add(tid)
+ total.remove(tid)
+ next.intersection_update(total)
+
+ if changed:
+ for mc in self.rq.worker:
+ self.rq.worker[mc].process.stdin.write(b"<newtaskhashes>" + pickle.dumps(bb.parse.siggen.get_taskhashes()) + b"</newtaskhashes>")
+ for mc in self.rq.fakeworker:
+ self.rq.fakeworker[mc].process.stdin.write(b"<newtaskhashes>" + pickle.dumps(bb.parse.siggen.get_taskhashes()) + b"</newtaskhashes>")
+
+ logger.debug(1, pprint.pformat("Tasks changed:\n%s" % (changed)))
+
+ for tid in changed:
+ if tid not in self.rqdata.runq_setscene_tids:
+ continue
+ valid = self.rq.validate_hashes(set([tid]), self.cooker.data, None, False)
+ if not valid:
+ continue
+ if tid in self.runq_running:
+ continue
+ if tid not in self.pending_migrations:
+ self.pending_migrations.add(tid)
+
+ for tid in self.pending_migrations.copy():
+ valid = True
+ # Check no tasks this covers are running
+ for dep in self.sqdata.sq_covered_tasks[tid]:
+ if dep in self.runq_running and dep not in self.runq_complete:
+ logger.debug(2, "Task %s is running which blocks setscene for %s from running" % (dep, tid))
+ valid = False
+ break
+ if not valid:
+ continue
+
+ self.pending_migrations.remove(tid)
+
+ if tid in self.tasks_scenequeue_done:
+ self.tasks_scenequeue_done.remove(tid)
+ for dep in self.sqdata.sq_covered_tasks[tid]:
+ if dep not in self.runq_complete:
+ if dep in self.tasks_scenequeue_done and dep not in self.sqdata.unskippable:
+ self.tasks_scenequeue_done.remove(dep)
+
+ if tid in self.sq_buildable:
+ self.sq_buildable.remove(tid)
+ if tid in self.sq_running:
+ self.sq_running.remove(tid)
+ if self.sqdata.sq_revdeps[tid].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
+ if tid not in self.sq_buildable:
+ self.sq_buildable.add(tid)
+ if len(self.sqdata.sq_revdeps[tid]) == 0:
+ self.sq_buildable.add(tid)
+
+ if tid in self.sqdata.outrightfail:
+ self.sqdata.outrightfail.remove(tid)
+ if tid in self.scenequeue_notcovered:
+ self.scenequeue_notcovered.remove(tid)
+ if tid in self.scenequeue_covered:
+ self.scenequeue_covered.remove(tid)
+ if tid in self.scenequeue_notneeded:
+ self.scenequeue_notneeded.remove(tid)
+
+ (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
+ self.sqdata.stamps[tid] = bb.build.stampfile(taskname + "_setscene", self.rqdata.dataCaches[mc], taskfn, noextra=True)
+
+ if tid in self.stampcache:
+ del self.stampcache[tid]
+
+ if tid in self.build_stamps:
+ del self.build_stamps[tid]
+
+ logger.info("Setscene task %s now valid and being rerun" % tid)
+ self.sqdone = False
+
+ if changed:
+ self.update_holdofftasks()
def scenequeue_updatecounters(self, task, fail=False):
- for dep in self.sqdata.sq_deps[task]:
+
+ for dep in sorted(self.sqdata.sq_deps[task]):
if fail and task in self.sqdata.sq_harddeps and dep in self.sqdata.sq_harddeps[task]:
logger.debug(2, "%s was unavailable and is a hard dependency of %s so skipping" % (task, dep))
self.sq_task_failoutright(dep)
continue
- if task not in self.sqdata.sq_revdeps2[dep]:
- # May already have been removed by the fail case above
- continue
- self.sqdata.sq_revdeps2[dep].remove(task)
- if len(self.sqdata.sq_revdeps2[dep]) == 0:
- self.sq_buildable.add(dep)
+ if self.sqdata.sq_revdeps[dep].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
+ if dep not in self.sq_buildable:
+ self.sq_buildable.add(dep)
next = set([task])
while next:
new = set()
- for t in next:
+ for t in sorted(next):
self.tasks_scenequeue_done.add(t)
# Look down the dependency chain for non-setscene things which this task depends on
# and mark as 'done'
@@ -2225,39 +2348,31 @@
continue
if self.rqdata.runtaskentries[dep].revdeps.issubset(self.tasks_scenequeue_done):
new.add(dep)
- #logger.warning(" Down: " + dep)
next = new
- if task in self.sqdata.unskippable:
- self.scenequeue_process_unskippable(task)
+ notcovered = set(self.scenequeue_notcovered)
+ notcovered |= self.cantskip
+ for tid in self.scenequeue_notcovered:
+ notcovered |= self.sqdata.sq_covered_tasks[tid]
+ notcovered |= self.sqdata.unskippable.difference(self.rqdata.runq_setscene_tids)
+ notcovered.intersection_update(self.tasks_scenequeue_done)
- if task in self.scenequeue_notcovered:
- logger.debug(1, 'Not skipping setscene task %s', task)
- self.scenequeue_process_notcovered(task)
- elif task in self.scenequeue_covered:
- logger.debug(1, 'Queued setscene task %s', task)
- self.coveredtopocess.add(task)
+ covered = set(self.scenequeue_covered)
+ for tid in self.scenequeue_covered:
+ covered |= self.sqdata.sq_covered_tasks[tid]
+ covered.difference_update(notcovered)
+ covered.intersection_update(self.tasks_scenequeue_done)
- for task in self.coveredtopocess.copy():
- if self.sqdata.sq_covered_tasks[task].issubset(self.tasks_scenequeue_done):
- logger.debug(1, 'Processing setscene task %s', task)
- covered = self.sqdata.sq_covered_tasks[task]
- covered.add(task)
+ for tid in notcovered | covered:
+ if len(self.rqdata.runtaskentries[tid].depends) == 0:
+ self.setbuildable(tid)
+ elif self.rqdata.runtaskentries[tid].depends.issubset(self.runq_complete):
+ self.setbuildable(tid)
- # If a task is in target_tids and isn't a setscene task, we can't skip it.
- cantskip = covered.intersection(self.rqdata.target_tids).difference(self.rqdata.runq_setscene_tids)
- for tid in cantskip:
- self.tasks_notcovered.add(tid)
- self.scenequeue_process_notcovered(tid)
- covered.difference_update(cantskip)
+ self.tasks_covered = covered
+ self.tasks_notcovered = notcovered
- # Remove notcovered tasks
- covered.difference_update(self.tasks_notcovered)
- self.tasks_covered.update(covered)
- self.coveredtopocess.remove(task)
- for tid in covered:
- if len(self.rqdata.runtaskentries[tid].depends) == 0:
- self.setbuildable(tid)
+ self.update_holdofftasks()
def sq_task_completeoutright(self, task):
"""
@@ -2268,7 +2383,6 @@
logger.debug(1, 'Found task %s which could be accelerated', task)
self.scenequeue_covered.add(task)
- self.tasks_covered.add(task)
self.scenequeue_updatecounters(task)
def sq_check_taskfail(self, task):
@@ -2289,7 +2403,6 @@
self.sq_stats.taskFailed()
bb.event.fire(sceneQueueTaskFailed(task, self.sq_stats, result, self), self.cfgData)
self.scenequeue_notcovered.add(task)
- self.tasks_notcovered.add(task)
self.scenequeue_updatecounters(task, True)
self.sq_check_taskfail(task)
@@ -2299,7 +2412,6 @@
self.sq_stats.taskSkipped()
self.sq_stats.taskCompleted()
self.scenequeue_notcovered.add(task)
- self.tasks_notcovered.add(task)
self.scenequeue_updatecounters(task, True)
def sq_task_skip(self, task):
@@ -2377,8 +2489,6 @@
self.sq_deps = {}
# SceneQueue reverse dependencies
self.sq_revdeps = {}
- # Copy of reverse dependencies used by sq processing code
- self.sq_revdeps2 = {}
# Injected inter-setscene task dependencies
self.sq_harddeps = {}
# Cache of stamp files so duplicates can't run in parallel
@@ -2458,27 +2568,28 @@
rqdata.init_progress_reporter.next_stage()
- # Build a list of setscene tasks which are "unskippable"
- # These are direct endpoints referenced by the build
+ # Build a list of tasks which are "unskippable"
+ # These are direct endpoints referenced by the build upto and including setscene tasks
# Take the build endpoints (no revdeps) and find the sstate tasks they depend upon
new = True
for tid in rqdata.runtaskentries:
if len(rqdata.runtaskentries[tid].revdeps) == 0:
sqdata.unskippable.add(tid)
+ sqdata.unskippable |= sqrq.cantskip
while new:
new = False
- for tid in sqdata.unskippable.copy():
+ orig = sqdata.unskippable.copy()
+ for tid in sorted(orig, reverse=True):
if tid in rqdata.runq_setscene_tids:
continue
- sqdata.unskippable.remove(tid)
if len(rqdata.runtaskentries[tid].depends) == 0:
# These are tasks which have no setscene tasks in their chain, need to mark as directly buildable
- sqrq.tasks_notcovered.add(tid)
- sqrq.tasks_scenequeue_done.add(tid)
sqrq.setbuildable(tid)
- sqrq.scenequeue_process_unskippable(tid)
sqdata.unskippable |= rqdata.runtaskentries[tid].depends
- new = True
+ if sqdata.unskippable != orig:
+ new = True
+
+ sqrq.tasks_scenequeue_done |= sqdata.unskippable.difference(rqdata.runq_setscene_tids)
rqdata.init_progress_reporter.next_stage(len(rqdata.runtaskentries))
@@ -2537,7 +2648,6 @@
# bb.warn("Task %s_setscene: is %s " % (tid, data))
sqdata.sq_revdeps = sq_revdeps_squash
- sqdata.sq_revdeps2 = copy.deepcopy(sqdata.sq_revdeps)
sqdata.sq_covered_tasks = sq_collated_deps
# Build reverse version of revdeps to populate deps structure
@@ -2562,7 +2672,7 @@
stamppresent = []
tocheck = set()
- for tid in sqdata.sq_revdeps:
+ for tid in sorted(sqdata.sq_revdeps):
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
taskdep = rqdata.dataCaches[mc].task_deps[taskfn]
@@ -2595,7 +2705,7 @@
hashes = {}
for mc in sorted(multiconfigs):
- for tid in sqdata.sq_revdeps:
+ for tid in sorted(sqdata.sq_revdeps):
if mc_from_tid(tid) != mc:
continue
if tid not in valid_new and tid not in noexec and tid not in sqrq.scenequeue_notcovered:
@@ -2715,6 +2825,15 @@
runQueueEvent.__init__(self, task, stats, rq)
self.reason = reason
+class taskUniHashUpdate(bb.event.Event):
+ """
+ Base runQueue event class
+ """
+ def __init__(self, task, unihash):
+ self.taskid = task
+ self.unihash = unihash
+ bb.event.Event.__init__(self)
+
class runQueuePipe():
"""
Abstraction for a pipe between a worker thread and the server
@@ -2757,6 +2876,8 @@
except ValueError as e:
bb.msg.fatal("RunQueue", "failed load pickle '%s': '%s'" % (e, self.queue[7:index]))
bb.event.fire_from_worker(event, self.d)
+ if isinstance(event, taskUniHashUpdate):
+ self.rqexec.updated_taskhash_queue.append((event.taskid, event.unihash))
found = True
self.queue = self.queue[index+8:]
index = self.queue.find(b"</event>")
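The validate_hash() rework above drops the old positional-argument probing in favour of a single call shape, so a metadata hash check function (BB_HASHCHECK_FUNCTION) is now handed one sq_data dictionary. A hedged stub of the expected signature (the function name and body are illustrative; OE-Core's real implementation is updated in sstatesig.py by the commits listed above):

    def sample_hashcheck(sq_data, d, siginfo=False, currentcount=0, **kwargs):
        # sq_data carries per-tid dictionaries, as built in validate_hashes():
        #   sq_data['hash'][tid], sq_data['hashfn'][tid], sq_data['unihash'][tid]
        found = set()
        for tid in sq_data['hash']:
            # ... probe the sstate mirror / hash server here ...
            found.add(tid)
        return found    # the tids whose setscene artefacts are deemed valid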
diff --git a/poky/bitbake/lib/bb/server/process.py b/poky/bitbake/lib/bb/server/process.py
index f901fe5..69aae62 100644
--- a/poky/bitbake/lib/bb/server/process.py
+++ b/poky/bitbake/lib/bb/server/process.py
@@ -456,7 +456,10 @@
self.configuration.setServerRegIdleCallback(server.register_idle_function)
os.close(self.readypipe)
writer = ConnectionWriter(self.readypipein)
- self.cooker = bb.cooker.BBCooker(self.configuration, self.featureset)
+ try:
+ self.cooker = bb.cooker.BBCooker(self.configuration, self.featureset)
+ except bb.BBHandledException:
+ return None
writer.send("r")
writer.close()
server.cooker = self.cooker
diff --git a/poky/bitbake/lib/bb/siggen.py b/poky/bitbake/lib/bb/siggen.py
index 6a729f3..b503559 100644
--- a/poky/bitbake/lib/bb/siggen.py
+++ b/poky/bitbake/lib/bb/siggen.py
@@ -12,6 +12,7 @@
import difflib
import simplediff
from bb.checksum import FileChecksumCache
+from bb import runqueue
logger = logging.getLogger('BitBake.SigGen')
@@ -41,17 +42,17 @@
self.runtaskdeps = {}
self.file_checksum_values = {}
self.taints = {}
+ self.unitaskhashes = {}
def finalise(self, fn, d, varient):
return
- def get_unihash(self, task):
- return self.taskhash[task]
+ def get_unihash(self, tid):
+ return self.taskhash[tid]
- def get_taskhash(self, fn, task, deps, dataCache):
- k = fn + "." + task
- self.taskhash[k] = hashlib.sha256(k.encode("utf-8")).hexdigest()
- return self.taskhash[k]
+ def get_taskhash(self, tid, deps, dataCache):
+ self.taskhash[tid] = hashlib.sha256(tid.encode("utf-8")).hexdigest()
+ return self.taskhash[tid]
def writeout_file_checksum_cache(self):
"""Write/update the file checksum cache onto disk"""
@@ -73,14 +74,23 @@
return
def get_taskdata(self):
- return (self.runtaskdeps, self.taskhash, self.file_checksum_values, self.taints, self.basehash)
+ return (self.runtaskdeps, self.taskhash, self.file_checksum_values, self.taints, self.basehash, self.unitaskhashes)
def set_taskdata(self, data):
- self.runtaskdeps, self.taskhash, self.file_checksum_values, self.taints, self.basehash = data
+ self.runtaskdeps, self.taskhash, self.file_checksum_values, self.taints, self.basehash, self.unitaskhashes = data
def reset(self, data):
self.__init__(data)
+ def get_taskhashes(self):
+ return self.taskhash, self.unitaskhashes
+
+ def set_taskhashes(self, hashes):
+ self.taskhash, self.unitaskhashes = hashes
+
+ def save_unitaskhashes(self):
+ return
+
class SignatureGeneratorBasic(SignatureGenerator):
"""
@@ -96,7 +106,6 @@
self.taints = {}
self.gendeps = {}
self.lookupcache = {}
- self.pkgnameextract = re.compile(r"(?P<fn>.*)\..*")
self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST") or "").split())
self.taskwhitelist = None
self.init_rundepcheck(data)
@@ -107,6 +116,9 @@
else:
self.checksum_cache = None
+ self.unihash_cache = bb.cache.SimpleCache("1")
+ self.unitaskhashes = self.unihash_cache.init_cache(data, "bb_unihashes.dat", {})
+
def init_rundepcheck(self, data):
self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST") or None
if self.taskwhitelist:
@@ -122,16 +134,16 @@
taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basewhitelist, fn)
for task in tasklist:
- k = fn + "." + task
- if not ignore_mismatch and k in self.basehash and self.basehash[k] != basehash[k]:
- bb.error("When reparsing %s, the basehash value changed from %s to %s. The metadata is not deterministic and this needs to be fixed." % (k, self.basehash[k], basehash[k]))
+ tid = fn + ":" + task
+ if not ignore_mismatch and tid in self.basehash and self.basehash[tid] != basehash[tid]:
+ bb.error("When reparsing %s, the basehash value changed from %s to %s. The metadata is not deterministic and this needs to be fixed." % (tid, self.basehash[tid], basehash[tid]))
bb.error("The following commands may help:")
cmd = "$ bitbake %s -c%s" % (d.getVar('PN'), task)
# Make sure sigdata is dumped before run printdiff
bb.error("%s -Snone" % cmd)
bb.error("Then:")
bb.error("%s -Sprintdiff\n" % cmd)
- self.basehash[k] = basehash[k]
+ self.basehash[tid] = basehash[tid]
self.taskdeps[fn] = taskdeps
self.gendeps[fn] = gendeps
@@ -158,7 +170,7 @@
# self.dump_sigtask(fn, task, d.getVar("STAMP"), False)
for task in taskdeps:
- d.setVar("BB_BASEHASH_task-%s" % task, self.basehash[fn + "." + task])
+ d.setVar("BB_BASEHASH_task-%s" % task, self.basehash[fn + ":" + task])
def rundep_check(self, fn, recipename, task, dep, depname, dataCache):
# Return True if we should keep the dependency, False to drop it
@@ -178,33 +190,26 @@
pass
return taint
- def get_taskhash(self, fn, task, deps, dataCache):
+ def get_taskhash(self, tid, deps, dataCache):
- mc = ''
- if fn.startswith('mc:'):
- mc = fn.split(':')[1]
- k = fn + "." + task
+ (mc, _, task, fn) = bb.runqueue.split_tid_mcfn(tid)
- data = dataCache.basetaskhash[k]
- self.basehash[k] = data
- self.runtaskdeps[k] = []
- self.file_checksum_values[k] = []
+ data = dataCache.basetaskhash[tid]
+ self.basehash[tid] = data
+ self.runtaskdeps[tid] = []
+ self.file_checksum_values[tid] = []
recipename = dataCache.pkg_fn[fn]
for dep in sorted(deps, key=clean_basepath):
- pkgname = self.pkgnameextract.search(dep).group('fn')
- if mc:
- depmc = pkgname.split(':')[1]
- if mc != depmc:
- continue
- if dep.startswith("mc:") and not mc:
+ (depmc, _, deptaskname, depfn) = bb.runqueue.split_tid_mcfn(dep)
+ if mc != depmc:
continue
- depname = dataCache.pkg_fn[pkgname]
+ depname = dataCache.pkg_fn[depfn]
if not self.rundep_check(fn, recipename, task, dep, depname, dataCache):
continue
if dep not in self.taskhash:
bb.fatal("%s is not in taskhash, caller isn't calling in dependency order?" % dep)
data = data + self.get_unihash(dep)
- self.runtaskdeps[k].append(dep)
+ self.runtaskdeps[tid].append(dep)
if task in dataCache.file_checksums[fn]:
if self.checksum_cache:
@@ -212,7 +217,7 @@
else:
checksums = bb.fetch2.get_file_checksums(dataCache.file_checksums[fn][task], recipename)
for (f,cs) in checksums:
- self.file_checksum_values[k].append((f,cs))
+ self.file_checksum_values[tid].append((f,cs))
if cs:
data = data + cs
@@ -222,16 +227,16 @@
import uuid
taint = str(uuid.uuid4())
data = data + taint
- self.taints[k] = "nostamp:" + taint
+ self.taints[tid] = "nostamp:" + taint
taint = self.read_taint(fn, task, dataCache.stamp[fn])
if taint:
data = data + taint
- self.taints[k] = taint
- logger.warning("%s is tainted from a forced run" % k)
+ self.taints[tid] = taint
+ logger.warning("%s is tainted from a forced run" % tid)
h = hashlib.sha256(data.encode("utf-8")).hexdigest()
- self.taskhash[k] = h
+ self.taskhash[tid] = h
#d.setVar("BB_TASKHASH_task-%s" % task, taskhash[task])
return h
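In essence, the taskhash computed above is the base hash with the unihashes of all runtime dependencies, any per-file checksums, and any taint appended, then digested with SHA-256. A minimal sketch of that accumulation (simplified; the real method also filters dependencies via rundep_check, sorts them with clean_basepath, and consults the checksum cache):

    import hashlib

    def sketch_taskhash(basehash, dep_unihashes, file_checksums, taint=None):
        # Mirrors the accumulation pattern of get_taskhash() above.
        data = basehash
        for dep in sorted(dep_unihashes):
            data += dep_unihashes[dep]
        for _, cs in file_checksums:
            if cs:
                data += cs
        if taint:
            data += taint
        return hashlib.sha256(data.encode("utf-8")).hexdigest()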
@@ -244,17 +249,20 @@
bb.fetch2.fetcher_parse_save()
bb.fetch2.fetcher_parse_done()
+ def save_unitaskhashes(self):
+ self.unihash_cache.save(self.unitaskhashes)
+
def dump_sigtask(self, fn, task, stampbase, runtime):
- k = fn + "." + task
+ tid = fn + ":" + task
referencestamp = stampbase
if isinstance(runtime, str) and runtime.startswith("customfile"):
sigfile = stampbase
referencestamp = runtime[11:]
- elif runtime and k in self.taskhash:
- sigfile = stampbase + "." + task + ".sigdata" + "." + self.taskhash[k]
+ elif runtime and tid in self.taskhash:
+ sigfile = stampbase + "." + task + ".sigdata" + "." + self.taskhash[tid]
else:
- sigfile = stampbase + "." + task + ".sigbasedata" + "." + self.basehash[k]
+ sigfile = stampbase + "." + task + ".sigbasedata" + "." + self.basehash[tid]
bb.utils.mkdirhier(os.path.dirname(sigfile))
@@ -263,7 +271,7 @@
data['basewhitelist'] = self.basewhitelist
data['taskwhitelist'] = self.taskwhitelist
data['taskdeps'] = self.taskdeps[fn][task]
- data['basehash'] = self.basehash[k]
+ data['basehash'] = self.basehash[tid]
data['gendeps'] = {}
data['varvals'] = {}
data['varvals'][task] = self.lookupcache[fn][task]
@@ -273,30 +281,30 @@
data['gendeps'][dep] = self.gendeps[fn][dep]
data['varvals'][dep] = self.lookupcache[fn][dep]
- if runtime and k in self.taskhash:
- data['runtaskdeps'] = self.runtaskdeps[k]
- data['file_checksum_values'] = [(os.path.basename(f), cs) for f,cs in self.file_checksum_values[k]]
+ if runtime and tid in self.taskhash:
+ data['runtaskdeps'] = self.runtaskdeps[tid]
+ data['file_checksum_values'] = [(os.path.basename(f), cs) for f,cs in self.file_checksum_values[tid]]
data['runtaskhashes'] = {}
for dep in data['runtaskdeps']:
data['runtaskhashes'][dep] = self.get_unihash(dep)
- data['taskhash'] = self.taskhash[k]
+ data['taskhash'] = self.taskhash[tid]
taint = self.read_taint(fn, task, referencestamp)
if taint:
data['taint'] = taint
- if runtime and k in self.taints:
- if 'nostamp:' in self.taints[k]:
- data['taint'] = self.taints[k]
+ if runtime and tid in self.taints:
+ if 'nostamp:' in self.taints[tid]:
+ data['taint'] = self.taints[tid]
computed_basehash = calc_basehash(data)
- if computed_basehash != self.basehash[k]:
- bb.error("Basehash mismatch %s versus %s for %s" % (computed_basehash, self.basehash[k], k))
- if runtime and k in self.taskhash:
+ if computed_basehash != self.basehash[tid]:
+ bb.error("Basehash mismatch %s versus %s for %s" % (computed_basehash, self.basehash[tid], tid))
+ if runtime and tid in self.taskhash:
computed_taskhash = calc_taskhash(data)
- if computed_taskhash != self.taskhash[k]:
- bb.error("Taskhash mismatch %s versus %s for %s" % (computed_taskhash, self.taskhash[k], k))
- sigfile = sigfile.replace(self.taskhash[k], computed_taskhash)
+ if computed_taskhash != self.taskhash[tid]:
+ bb.error("Taskhash mismatch %s versus %s for %s" % (computed_taskhash, self.taskhash[tid], tid))
+ sigfile = sigfile.replace(self.taskhash[tid], computed_taskhash)
fd, tmpfile = tempfile.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
try:
@@ -316,34 +324,33 @@
if fn in self.taskdeps:
for task in self.taskdeps[fn]:
tid = fn + ":" + task
- (mc, _, _) = bb.runqueue.split_tid(tid)
- k = fn + "." + task
- if k not in self.taskhash:
+ mc = bb.runqueue.mc_from_tid(tid)
+ if tid not in self.taskhash:
continue
- if dataCaches[mc].basetaskhash[k] != self.basehash[k]:
- bb.error("Bitbake's cached basehash does not match the one we just generated (%s)!" % k)
- bb.error("The mismatched hashes were %s and %s" % (dataCaches[mc].basetaskhash[k], self.basehash[k]))
+ if dataCaches[mc].basetaskhash[tid] != self.basehash[tid]:
+ bb.error("Bitbake's cached basehash does not match the one we just generated (%s)!" % tid)
+ bb.error("The mismatched hashes were %s and %s" % (dataCaches[mc].basetaskhash[tid], self.basehash[tid]))
self.dump_sigtask(fn, task, dataCaches[mc].stamp[fn], True)
class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
name = "basichash"
- def get_stampfile_hash(self, task):
- if task in self.taskhash:
- return self.taskhash[task]
+ def get_stampfile_hash(self, tid):
+ if tid in self.taskhash:
+ return self.taskhash[tid]
# If tid is not in basehash, then error
- return self.basehash[task]
+ return self.basehash[tid]
def stampfile(self, stampbase, fn, taskname, extrainfo, clean=False):
if taskname != "do_setscene" and taskname.endswith("_setscene"):
- k = fn + "." + taskname[:-9]
+ tid = fn + ":" + taskname[:-9]
else:
- k = fn + "." + taskname
+ tid = fn + ":" + taskname
if clean:
h = "*"
else:
- h = self.get_stampfile_hash(k)
+ h = self.get_stampfile_hash(tid)
return ("%s.%s.%s.%s" % (stampbase, taskname, h, extrainfo)).rstrip('.')
@@ -354,6 +361,187 @@
bb.note("Tainting hash to force rebuild of task %s, %s" % (fn, task))
bb.build.write_taint(task, d, fn)
+class SignatureGeneratorUniHashMixIn(object):
+ def get_taskdata(self):
+ return (self.server, self.method) + super().get_taskdata()
+
+ def set_taskdata(self, data):
+ self.server, self.method = data[:2]
+ super().set_taskdata(data[2:])
+
+ def __get_task_unihash_key(self, tid):
+ # TODO: The key only *needs* to be the taskhash; the tid is just
+ # included for convenience
+ return '%s:%s' % (tid, self.taskhash[tid])
+
+ def get_stampfile_hash(self, tid):
+ if tid in self.taskhash:
+ # If a unique hash is reported, use it as the stampfile hash. This
+ # ensures that a task won't be re-run if its taskhash changes but it
+ # would still produce the same output hash
+ unihash = self.unitaskhashes.get(self.__get_task_unihash_key(tid), None)
+ if unihash is not None:
+ return unihash
+
+ return super().get_stampfile_hash(tid)
+
+ def set_unihash(self, tid, unihash):
+ self.unitaskhashes[self.__get_task_unihash_key(tid)] = unihash
+
+ def get_unihash(self, tid):
+ import urllib
+ import json
+
+ taskhash = self.taskhash[tid]
+
+ key = self.__get_task_unihash_key(tid)
+
+ # TODO: This cache can grow unbounded. It probably only needs to keep
+ # the unihash for the most recent taskhash of each task
+ unihash = self.unitaskhashes.get(key, None)
+ if unihash is not None:
+ return unihash
+
+ # In the absence of a unique hash from the server, make it equivalent
+ # to the taskhash. The unique "hash" only really needs to be a unique
+ # string (not even necessarily a hash), but making it match the
+ # taskhash has a few advantages:
+ #
+ # 1) All of the sstate code that assumes hashes can be the same
+ # continues to work
+ # 2) It provides maximal compatibility with builders that don't use
+ # an equivalency server
+ # 3) Multiple independent builders can easily derive the same unique
+ # hash from the same input. This means that if independent builders
+ # find the same taskhash, but it isn't reported to the server, there
+ # is a better chance that they will agree on the unique hash.
+ unihash = taskhash
+
+ try:
+ url = '%s/v1/equivalent?%s' % (self.server,
+ urllib.parse.urlencode({'method': self.method, 'taskhash': self.taskhash[tid]}))
+
+ request = urllib.request.Request(url)
+ response = urllib.request.urlopen(request)
+ data = response.read().decode('utf-8')
+
+ json_data = json.loads(data)
+
+ if json_data:
+ unihash = json_data['unihash']
+ # A unique hash equal to the taskhash is not very interesting, so it
+ # is reported at debug level 2. If they differ, that is much more
+ # interesting, so it is reported at debug level 1
+ bb.debug((1, 2)[unihash == taskhash], 'Found unihash %s in place of %s for %s from %s' % (unihash, taskhash, tid, self.server))
+ else:
+ bb.debug(2, 'No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
+ except urllib.error.URLError as e:
+ bb.warn('Failure contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
+ except (KeyError, json.JSONDecodeError) as e:
+ bb.warn('Poorly formatted response from %s: %s' % (self.server, str(e)))
+
+ self.unitaskhashes[key] = unihash
+ return unihash
+
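On the wire, the lookup above is a plain HTTP GET against the server's /v1/equivalent endpoint. A self-contained sketch of the same exchange (the server URL is an assumed example):

    import json
    import urllib.parse
    import urllib.request

    def query_unihash(server, method, taskhash):
        # The same GET that get_unihash() issues; returns None when the
        # server has no equivalent entry recorded for this taskhash.
        url = '%s/v1/equivalent?%s' % (
            server, urllib.parse.urlencode({'method': method, 'taskhash': taskhash}))
        with urllib.request.urlopen(url) as response:
            data = json.loads(response.read().decode('utf-8'))
        return data['unihash'] if data else None

    # e.g. query_unihash('http://localhost:8686', 'sstate_output_hash', 'abc123...')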
+ def report_unihash(self, path, task, d):
+ import urllib
+ import json
+ import tempfile
+ import base64
+ import importlib
+
+ taskhash = d.getVar('BB_TASKHASH')
+ unihash = d.getVar('BB_UNIHASH')
+ report_taskdata = d.getVar('SSTATE_HASHEQUIV_REPORT_TASKDATA') == '1'
+ tempdir = d.getVar('T')
+ fn = d.getVar('BB_FILENAME')
+ key = fn + ':do_' + task + ':' + taskhash
+
+ # Sanity checks
+ cache_unihash = self.unitaskhashes.get(key, None)
+ if cache_unihash is None:
+ bb.fatal('%s not in unihash cache. Please report this error' % key)
+
+ if cache_unihash != unihash:
+ bb.fatal("Cache unihash %s doesn't match BB_UNIHASH %s" % (cache_unihash, unihash))
+
+ sigfile = None
+ sigfile_name = "depsig.do_%s.%d" % (task, os.getpid())
+ sigfile_link = "depsig.do_%s" % task
+
+ try:
+ sigfile = open(os.path.join(tempdir, sigfile_name), 'w+b')
+
+ locs = {'path': path, 'sigfile': sigfile, 'task': task, 'd': d}
+
+ if "." in self.method:
+ (module, method) = self.method.rsplit('.', 1)
+ locs['method'] = getattr(importlib.import_module(module), method)
+ outhash = bb.utils.better_eval('method(path, sigfile, task, d)', locs)
+ else:
+ outhash = bb.utils.better_eval(self.method + '(path, sigfile, task, d)', locs)
+
+ try:
+ url = '%s/v1/equivalent' % self.server
+ task_data = {
+ 'taskhash': taskhash,
+ 'method': self.method,
+ 'outhash': outhash,
+ 'unihash': unihash,
+ 'owner': d.getVar('SSTATE_HASHEQUIV_OWNER')
+ }
+
+ if report_taskdata:
+ sigfile.seek(0)
+
+ task_data['PN'] = d.getVar('PN')
+ task_data['PV'] = d.getVar('PV')
+ task_data['PR'] = d.getVar('PR')
+ task_data['task'] = task
+ task_data['outhash_siginfo'] = sigfile.read().decode('utf-8')
+
+ headers = {'content-type': 'application/json'}
+
+ request = urllib.request.Request(url, json.dumps(task_data).encode('utf-8'), headers)
+ response = urllib.request.urlopen(request)
+ data = response.read().decode('utf-8')
+
+ json_data = json.loads(data)
+ new_unihash = json_data['unihash']
+
+ if new_unihash != unihash:
+ bb.debug(1, 'Task %s unihash changed %s -> %s by server %s' % (taskhash, unihash, new_unihash, self.server))
+ bb.event.fire(bb.runqueue.taskUniHashUpdate(fn + ':do_' + task, new_unihash), d)
+ else:
+ bb.debug(1, 'Reported task %s as unihash %s to %s' % (taskhash, unihash, self.server))
+ except urllib.error.URLError as e:
+ bb.warn('Failure contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
+ except (KeyError, json.JSONDecodeError) as e:
+ bb.warn('Poorly formatted response from %s: %s' % (self.server, str(e)))
+ finally:
+ if sigfile:
+ sigfile.close()
+
+ sigfile_link_path = os.path.join(tempdir, sigfile_link)
+ bb.utils.remove(sigfile_link_path)
+
+ try:
+ os.symlink(sigfile_name, sigfile_link_path)
+ except OSError:
+ pass
+
+
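The reporting side is symmetrical: a JSON POST to the same endpoint carrying the taskhash, method, outhash, and proposed unihash, with the server answering the unihash that should actually be used (possibly one recorded for an older, equivalent task). A minimal sketch (server URL assumed):

    import json
    import urllib.request

    def report_outhash(server, method, taskhash, outhash, unihash):
        # The same POST that report_unihash() sends, minus the optional
        # SSTATE_HASHEQUIV_REPORT_TASKDATA fields.
        task_data = {
            'taskhash': taskhash,
            'method': method,
            'outhash': outhash,
            'unihash': unihash,
        }
        request = urllib.request.Request(
            '%s/v1/equivalent' % server,
            json.dumps(task_data).encode('utf-8'),
            {'content-type': 'application/json'})
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read().decode('utf-8'))['unihash']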
+#
+# Dummy class used for bitbake-selftest
+#
+class SignatureGeneratorTestEquivHash(SignatureGeneratorUniHashMixIn, SignatureGeneratorBasicHash):
+ name = "TestEquivHash"
+ def init_rundepcheck(self, data):
+ super().init_rundepcheck(data)
+ self.server = "http://" + data.getVar('BB_HASHSERVE')
+ self.method = "sstate_output_hash"
+
+
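The dummy generator is selected the same way the selftests below do it: by exporting BB_SIGNATURE_HANDLER (matched against the class's name attribute) and BB_HASHSERVE through the environment. Mirroring the extraenv handling in run_bitbakecmd() (values taken from the tests):

    # Environment the runqueue selftests pass through extraenv.
    extraenv = {
        "BB_HASHSERVE": "localhost:0",
        "BB_SIGNATURE_HANDLER": "TestEquivHash",
    }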
def dump_this_task(outfile, d):
import bb.parse
fn = d.getVar("BB_FILENAME")
diff --git a/poky/bitbake/lib/bb/tests/data.py b/poky/bitbake/lib/bb/tests/data.py
index 3cf5abe..a9b0bdb 100644
--- a/poky/bitbake/lib/bb/tests/data.py
+++ b/poky/bitbake/lib/bb/tests/data.py
@@ -466,7 +466,7 @@
tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d)
taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, set(), "somefile")
bb.warn(str(lookupcache))
- return basehash["somefile." + taskname]
+ return basehash["somefile:" + taskname]
d = bb.data.init()
d.setVar("__BBTASKS", ["mytask"])
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/classes/base.bbclass b/poky/bitbake/lib/bb/tests/runqueue-tests/classes/base.bbclass
index 5b87e20..b57650d 100644
--- a/poky/bitbake/lib/bb/tests/runqueue-tests/classes/base.bbclass
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/classes/base.bbclass
@@ -5,18 +5,30 @@
import time
thistask = d.expand("${PN}:${BB_CURRENTTASK}")
- with open(d.expand("${TOPDIR}/%s.run") % thistask, "a+") as f:
- f.write("\n")
+ stampname = d.expand("${TOPDIR}/%s.run" % thistask)
+ with open(stampname, "a+") as f:
+ f.write(d.getVar("BB_UNIHASH") + "\n")
if d.getVar("BB_CURRENT_MC") != "default":
thistask = d.expand("${BB_CURRENT_MC}:${PN}:${BB_CURRENTTASK}")
if thistask in d.getVar("SLOWTASKS").split():
bb.note("Slowing task %s" % thistask)
time.sleep(0.5)
+ if d.getVar("BB_HASHSERVE"):
+ task = d.getVar("BB_CURRENTTASK")
+ if task in ['package', 'package_qa', 'packagedata', 'package_write_ipk', 'package_write_rpm', 'populate_lic', 'populate_sysroot']:
+ bb.parse.siggen.report_unihash(os.getcwd(), d.getVar("BB_CURRENTTASK"), d)
with open(d.expand("${TOPDIR}/task.log"), "a+") as f:
f.write(thistask + "\n")
+
+def sstate_output_hash(path, sigfile, task, d):
+ import hashlib
+ h = hashlib.sha256()
+ h.update(d.expand("${PN}:${BB_CURRENTTASK}").encode('utf-8'))
+ return h.hexdigest()
+
python do_fetch() {
# fetch
stamptask(d)
@@ -216,27 +228,35 @@
BB_HASHCHECK_FUNCTION = "sstate_checkhashes"
-def sstate_checkhashes(sq_fn, sq_task, sq_hash, sq_hashfn, d, siginfo=False, *, sq_unihash=None):
+def sstate_checkhashes(sq_data, d, siginfo=False, currentcount=0, **kwargs):
- ret = []
- missed = []
+ found = set()
+ missed = set()
valid = d.getVar("SSTATEVALID").split()
- for task in range(len(sq_fn)):
- n = os.path.basename(sq_fn[task]).rsplit(".", 1)[0] + ":" + sq_task[task]
+ for tid in sorted(sq_data['hash']):
+ n = os.path.basename(bb.runqueue.fn_from_tid(tid)).split(".")[0] + ":do_" + bb.runqueue.taskname_from_tid(tid)[3:]
+ print(n)
+ stampfile = d.expand("${TOPDIR}/%s.run" % n.replace("do_", ""))
if n in valid:
bb.note("SState: Found valid sstate for %s" % n)
- ret.append(task)
- elif os.path.exists(d.expand("${TOPDIR}/%s.run" % n.replace("do_", ""))):
- bb.note("SState: Found valid sstate for %s (already run)" % n)
- ret.append(task)
+ found.add(tid)
+ elif n + ":" + sq_data['hash'][tid] in valid:
+ bb.note("SState: Found valid sstate for %s" % n)
+ found.add(tid)
+ elif os.path.exists(stampfile):
+ with open(stampfile, "r") as f:
+ hash = f.readline().strip()
+ if hash == sq_data['hash'][tid]:
+ bb.note("SState: Found valid sstate for %s (already run)" % n)
+ found.add(tid)
+ else:
+ bb.note("SState: sstate hash didn't match previous run for %s (%s vs %s)" % (n, sq_data['hash'][tid], hash))
+ missed.add(tid)
else:
- missed.append(task)
- bb.note("SState: Found no valid sstate for %s" % n)
+ missed.add(tid)
+ bb.note("SState: Found no valid sstate for %s (%s)" % (n, sq_data['hash'][tid]))
- if hasattr(bb.parse.siggen, "checkhashes"):
- bb.parse.siggen.checkhashes(missed, ret, sq_fn, sq_task, sq_hash, sq_hashfn, d)
-
- return ret
+ return found
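The interesting part of this rewrite is the hook signature: the old function took parallel lists (sq_fn, sq_task, sq_hash, sq_hashfn) indexed by position and returned a list of indices, while the new one takes a single sq_data dictionary keyed by tid and returns the set of tids found in sstate. A sketch of the new shape (paths and hashes are made up):

    # Hypothetical sq_data layout for the new-style hash-check hook.
    sq_data = {
        'hash': {
            '/path/to/a1.bb:do_package': 'deadbeef',
            '/path/to/b1.bb:do_populate_sysroot': 'cafef00d',
        },
    }

    def trivial_checkhashes(sq_data, d=None, siginfo=False, currentcount=0, **kwargs):
        # Claims every queried task is available in sstate.
        return set(sq_data['hash'])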
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf
index 96ee1cd..5e451fc 100644
--- a/poky/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf
@@ -11,6 +11,6 @@
T = "${TMPDIR}/workdir/${PN}/temp"
BB_NUMBER_THREADS = "4"
-BB_HASHBASE_WHITELIST = "BB_CURRENT_MC"
+BB_HASHBASE_WHITELIST = "BB_CURRENT_MC BB_HASHSERVE TMPDIR TOPDIR SLOWTASKS SSTATEVALID FILE"
include conf/multiconfig/${BB_CURRENT_MC}.conf
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/e1.bb b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/e1.bb
new file mode 100644
index 0000000..1588bc8
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/e1.bb
@@ -0,0 +1 @@
+DEPENDS = "b1"
\ No newline at end of file
diff --git a/poky/bitbake/lib/bb/tests/runqueue.py b/poky/bitbake/lib/bb/tests/runqueue.py
index f22ad4b..c7f5e55 100644
--- a/poky/bitbake/lib/bb/tests/runqueue.py
+++ b/poky/bitbake/lib/bb/tests/runqueue.py
@@ -25,7 +25,7 @@
a1_sstatevalid = "a1:do_package a1:do_package_qa a1:do_packagedata a1:do_package_write_ipk a1:do_package_write_rpm a1:do_populate_lic a1:do_populate_sysroot"
b1_sstatevalid = "b1:do_package b1:do_package_qa b1:do_packagedata b1:do_package_write_ipk b1:do_package_write_rpm b1:do_populate_lic b1:do_populate_sysroot"
- def run_bitbakecmd(self, cmd, builddir, sstatevalid="", slowtasks="", extraenv=None):
+ def run_bitbakecmd(self, cmd, builddir, sstatevalid="", slowtasks="", extraenv=None, cleanup=False):
env = os.environ.copy()
env["BBPATH"] = os.path.realpath(os.path.join(os.path.dirname(__file__), "runqueue-tests"))
env["BB_ENV_EXTRAWHITE"] = "SSTATEVALID SLOWTASKS"
@@ -37,11 +37,16 @@
env["BB_ENV_EXTRAWHITE"] = env["BB_ENV_EXTRAWHITE"] + " " + k
try:
output = subprocess.check_output(cmd, env=env, stderr=subprocess.STDOUT,universal_newlines=True, cwd=builddir)
+ print(output)
except subprocess.CalledProcessError as e:
self.fail("Command %s failed with %s" % (cmd, e.output))
tasks = []
- with open(builddir + "/task.log", "r") as f:
- tasks = [line.rstrip() for line in f]
+ tasklog = builddir + "/task.log"
+ if os.path.exists(tasklog):
+ with open(tasklog, "r") as f:
+ tasks = [line.rstrip() for line in f]
+ if cleanup:
+ os.remove(tasklog)
return tasks
def test_no_setscenevalid(self):
@@ -226,3 +231,166 @@
expected.remove(x)
self.assertEqual(set(tasks), set(expected))
+
+ def test_hashserv_single(self):
+ with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+ extraenv = {
+ "BB_HASHSERVE" : "localhost:0",
+ "BB_SIGNATURE_HANDLER" : "TestEquivHash"
+ }
+ cmd = ["bitbake", "a1", "b1"]
+ setscenetasks = ['package_write_ipk_setscene', 'package_write_rpm_setscene', 'packagedata_setscene',
+ 'populate_sysroot_setscene', 'package_qa_setscene']
+ sstatevalid = ""
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ expected = ['a1:' + x for x in self.alltasks] + ['b1:' + x for x in self.alltasks]
+ self.assertEqual(set(tasks), set(expected))
+ cmd = ["bitbake", "a1", "-c", "install", "-f"]
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ expected = ['a1:install']
+ self.assertEqual(set(tasks), set(expected))
+ cmd = ["bitbake", "a1", "b1"]
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ expected = ['a1:populate_sysroot', 'a1:package', 'a1:package_write_rpm_setscene', 'a1:packagedata_setscene',
+ 'a1:package_write_ipk_setscene', 'a1:package_qa_setscene']
+ self.assertEqual(set(tasks), set(expected))
+
+ def test_hashserv_double(self):
+ with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+ extraenv = {
+ "BB_HASHSERVE" : "localhost:0",
+ "BB_SIGNATURE_HANDLER" : "TestEquivHash"
+ }
+ cmd = ["bitbake", "a1", "b1", "e1"]
+ setscenetasks = ['package_write_ipk_setscene', 'package_write_rpm_setscene', 'packagedata_setscene',
+ 'populate_sysroot_setscene', 'package_qa_setscene']
+ sstatevalid = ""
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ expected = ['a1:' + x for x in self.alltasks] + ['b1:' + x for x in self.alltasks] + ['e1:' + x for x in self.alltasks]
+ self.assertEqual(set(tasks), set(expected))
+ cmd = ["bitbake", "a1", "b1", "-c", "install", "-fn"]
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ cmd = ["bitbake", "e1"]
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ expected = ['a1:package', 'a1:install', 'b1:package', 'b1:install', 'a1:populate_sysroot', 'b1:populate_sysroot',
+ 'a1:package_write_ipk_setscene', 'b1:packagedata_setscene', 'b1:package_write_rpm_setscene',
+ 'a1:package_write_rpm_setscene', 'b1:package_write_ipk_setscene', 'a1:packagedata_setscene']
+ self.assertEqual(set(tasks), set(expected))
+
+
+ def test_hashserv_multiple_setscene(self):
+ # Runs e1:do_package_setscene twice
+ with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+ extraenv = {
+ "BB_HASHSERVE" : "localhost:0",
+ "BB_SIGNATURE_HANDLER" : "TestEquivHash"
+ }
+ cmd = ["bitbake", "a1", "b1", "e1"]
+ setscenetasks = ['package_write_ipk_setscene', 'package_write_rpm_setscene', 'packagedata_setscene',
+ 'populate_sysroot_setscene', 'package_qa_setscene']
+ sstatevalid = ""
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ expected = ['a1:' + x for x in self.alltasks] + ['b1:' + x for x in self.alltasks] + ['e1:' + x for x in self.alltasks]
+ self.assertEqual(set(tasks), set(expected))
+ cmd = ["bitbake", "a1", "b1", "-c", "install", "-fn"]
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ cmd = ["bitbake", "e1"]
+ sstatevalid = "e1:do_package"
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True, slowtasks="a1:populate_sysroot b1:populate_sysroot")
+ expected = ['a1:package', 'a1:install', 'b1:package', 'b1:install', 'a1:populate_sysroot', 'b1:populate_sysroot',
+ 'a1:package_write_ipk_setscene', 'b1:packagedata_setscene', 'b1:package_write_rpm_setscene',
+ 'a1:package_write_rpm_setscene', 'b1:package_write_ipk_setscene', 'a1:packagedata_setscene',
+ 'e1:package_setscene']
+ self.assertEqual(set(tasks), set(expected))
+ for i in expected:
+ if i in ["e1:package_setscene"]:
+ self.assertEqual(tasks.count(i), 4, "%s not in task list four times" % i)
+ else:
+ self.assertEqual(tasks.count(i), 1, "%s not in task list once" % i)
+
+ def test_hashserv_partial_match(self):
+ # e1:do_package matches the initial build but not the second hash value
+ with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+ extraenv = {
+ "BB_HASHSERVE" : "localhost:0",
+ "BB_SIGNATURE_HANDLER" : "TestEquivHash"
+ }
+ cmd = ["bitbake", "a1", "b1"]
+ setscenetasks = ['package_write_ipk_setscene', 'package_write_rpm_setscene', 'packagedata_setscene',
+ 'populate_sysroot_setscene', 'package_qa_setscene']
+ sstatevalid = ""
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ expected = ['a1:' + x for x in self.alltasks] + ['b1:' + x for x in self.alltasks]
+ self.assertEqual(set(tasks), set(expected))
+ with open(tempdir + "/stamps/a1.do_install.taint", "w") as f:
+ f.write("d460a29e-903f-4b76-a96b-3bcc22a65994")
+ with open(tempdir + "/stamps/b1.do_install.taint", "w") as f:
+ f.write("ed36d46a-2977-458a-b3de-eef885bc1817")
+ cmd = ["bitbake", "e1"]
+ sstatevalid = "e1:do_package:685e69a026b2f029483fdefe6a11e1e06641dd2a0f6f86e27b9b550f8f21229d"
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ expected = ['a1:package', 'a1:install', 'b1:package', 'b1:install', 'a1:populate_sysroot', 'b1:populate_sysroot',
+ 'a1:package_write_ipk_setscene', 'b1:packagedata_setscene', 'b1:package_write_rpm_setscene',
+ 'a1:package_write_rpm_setscene', 'b1:package_write_ipk_setscene', 'a1:packagedata_setscene',
+ 'e1:package_setscene'] + ['e1:' + x for x in self.alltasks]
+ expected.remove('e1:package')
+ self.assertEqual(set(tasks), set(expected))
+
+ def test_hashserv_partial_match2(self):
+ # e1:do_package + e1:do_populate_sysroot match the initial build but not the second hash value
+ with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+ extraenv = {
+ "BB_HASHSERVE" : "localhost:0",
+ "BB_SIGNATURE_HANDLER" : "TestEquivHash"
+ }
+ cmd = ["bitbake", "a1", "b1"]
+ setscenetasks = ['package_write_ipk_setscene', 'package_write_rpm_setscene', 'packagedata_setscene',
+ 'populate_sysroot_setscene', 'package_qa_setscene']
+ sstatevalid = ""
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ expected = ['a1:' + x for x in self.alltasks] + ['b1:' + x for x in self.alltasks]
+ self.assertEqual(set(tasks), set(expected))
+ with open(tempdir + "/stamps/a1.do_install.taint", "w") as f:
+ f.write("d460a29e-903f-4b76-a96b-3bcc22a65994")
+ with open(tempdir + "/stamps/b1.do_install.taint", "w") as f:
+ f.write("ed36d46a-2977-458a-b3de-eef885bc1817")
+ cmd = ["bitbake", "e1"]
+ sstatevalid = "e1:do_package:685e69a026b2f029483fdefe6a11e1e06641dd2a0f6f86e27b9b550f8f21229d e1:do_populate_sysroot:ef7dc0e2dd55d0534e75cba50731ff42f949818b6f29a65d72bc05856e56711d"
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ expected = ['a1:package', 'a1:install', 'b1:package', 'b1:install', 'a1:populate_sysroot', 'b1:populate_sysroot',
+ 'a1:package_write_ipk_setscene', 'b1:packagedata_setscene', 'b1:package_write_rpm_setscene',
+ 'a1:package_write_rpm_setscene', 'b1:package_write_ipk_setscene', 'a1:packagedata_setscene',
+ 'e1:package_setscene', 'e1:populate_sysroot_setscene', 'e1:build', 'e1:package_qa', 'e1:package_write_rpm', 'e1:package_write_ipk', 'e1:packagedata']
+ self.assertEqual(set(tasks), set(expected))
+
+ def test_hashserv_partial_match3(self):
+ # e1:do_package is valid for a1 but not after b1
+ # In the former buggy code, this triggered e1:do_fetch and then
+ # e1:do_populate_sysroot to run with none of the intermediate tasks,
+ # which was a serious bug
+ with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+ extraenv = {
+ "BB_HASHSERVE" : "localhost:0",
+ "BB_SIGNATURE_HANDLER" : "TestEquivHash"
+ }
+ cmd = ["bitbake", "a1", "b1"]
+ setscenetasks = ['package_write_ipk_setscene', 'package_write_rpm_setscene', 'packagedata_setscene',
+ 'populate_sysroot_setscene', 'package_qa_setscene']
+ sstatevalid = ""
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True)
+ expected = ['a1:' + x for x in self.alltasks] + ['b1:' + x for x in self.alltasks]
+ self.assertEqual(set(tasks), set(expected))
+ with open(tempdir + "/stamps/a1.do_install.taint", "w") as f:
+ f.write("d460a29e-903f-4b76-a96b-3bcc22a65994")
+ with open(tempdir + "/stamps/b1.do_install.taint", "w") as f:
+ f.write("ed36d46a-2977-458a-b3de-eef885bc1817")
+ cmd = ["bitbake", "e1", "-DD"]
+ sstatevalid = "e1:do_package:af056eae12a733a6a8c4f4da8c6757e588e13565852c94e2aad4d953a3989c13 e1:do_package:a3677703db82b22d28d57c1820a47851dd780104580863f5bd32e66e003a779d"
+ tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv, cleanup=True, slowtasks="e1:fetch b1:install")
+ expected = ['a1:package', 'a1:install', 'b1:package', 'b1:install', 'a1:populate_sysroot', 'b1:populate_sysroot',
+ 'a1:package_write_ipk_setscene', 'b1:packagedata_setscene', 'b1:package_write_rpm_setscene',
+ 'a1:package_write_rpm_setscene', 'b1:package_write_ipk_setscene', 'a1:packagedata_setscene',
+ 'e1:package_setscene'] + ['e1:' + x for x in self.alltasks]
+ expected.remove('e1:package')
+ self.assertEqual(set(tasks), set(expected))
+
+
diff --git a/poky/bitbake/lib/bb/ui/knotty.py b/poky/bitbake/lib/bb/ui/knotty.py
index 1c72aa2..35736ad 100644
--- a/poky/bitbake/lib/bb/ui/knotty.py
+++ b/poky/bitbake/lib/bb/ui/knotty.py
@@ -689,17 +689,27 @@
if params.observe_only:
print("\nKeyboard Interrupt, exiting observer...")
main.shutdown = 2
- if not params.observe_only and main.shutdown == 1:
+
+ def state_force_shutdown():
print("\nSecond Keyboard Interrupt, stopping...\n")
_, error = server.runCommand(["stateForceShutdown"])
if error:
logger.error("Unable to cleanly stop: %s" % error)
+
+ if not params.observe_only and main.shutdown == 1:
+ state_force_shutdown()
+
if not params.observe_only and main.shutdown == 0:
print("\nKeyboard Interrupt, closing down...\n")
interrupted = True
- _, error = server.runCommand(["stateShutdown"])
- if error:
- logger.error("Unable to cleanly shutdown: %s" % error)
+ # Catch a second KeyboardInterrupt while stateShutdown is running
+ try:
+ _, error = server.runCommand(["stateShutdown"])
+ if error:
+ logger.error("Unable to cleanly shutdown: %s" % error)
+ except KeyboardInterrupt:
+ state_force_shutdown()
+
main.shutdown = main.shutdown + 1
pass
except Exception as e:
diff --git a/poky/bitbake/lib/bb/utils.py b/poky/bitbake/lib/bb/utils.py
index ed19825..0618e46 100644
--- a/poky/bitbake/lib/bb/utils.py
+++ b/poky/bitbake/lib/bb/utils.py
@@ -394,7 +394,7 @@
code = better_compile(code, realfile, realfile)
try:
exec(code, get_context(), context)
- except (bb.BBHandledException, bb.parse.SkipRecipe, bb.build.FuncFailed, bb.data_smart.ExpansionError):
+ except (bb.BBHandledException, bb.parse.SkipRecipe, bb.data_smart.ExpansionError):
# Error already shown so passthrough, no need for traceback
raise
except Exception as e:
diff --git a/poky/bitbake/lib/hashserv/__init__.py b/poky/bitbake/lib/hashserv/__init__.py
index fdc9ced..eb03c32 100644
--- a/poky/bitbake/lib/hashserv/__init__.py
+++ b/poky/bitbake/lib/hashserv/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (C) 2018 Garmin Ltd.
+# Copyright (C) 2018-2019 Garmin Ltd.
#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -10,6 +10,12 @@
import json
import traceback
import logging
+import socketserver
+import queue
+import threading
+import signal
+import socket
+import struct
from datetime import datetime
logger = logging.getLogger('hashserv')
@@ -18,8 +24,17 @@
def log_message(self, f, *args):
logger.debug(f, *args)
+ def opendb(self):
+ self.db = sqlite3.connect(self.dbname)
+ self.db.row_factory = sqlite3.Row
+ self.db.execute("PRAGMA synchronous = OFF;")
+ self.db.execute("PRAGMA journal_mode = MEMORY;")
+
def do_GET(self):
try:
+ if not self.db:
+ self.opendb()
+
p = urllib.parse.urlparse(self.path)
if p.path != self.prefix + '/v1/equivalent':
@@ -32,7 +47,7 @@
d = None
with contextlib.closing(self.db.cursor()) as cursor:
- cursor.execute('SELECT taskhash, method, unihash FROM tasks_v1 WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1',
+ cursor.execute('SELECT taskhash, method, unihash FROM tasks_v2 WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1',
{'method': method, 'taskhash': taskhash})
row = cursor.fetchone()
@@ -52,6 +67,9 @@
def do_POST(self):
try:
+ if not self.db:
+ self.opendb()
+
p = urllib.parse.urlparse(self.path)
if p.path != self.prefix + '/v1/equivalent':
@@ -63,15 +81,29 @@
with contextlib.closing(self.db.cursor()) as cursor:
cursor.execute('''
- SELECT taskhash, method, unihash FROM tasks_v1 WHERE method=:method AND outhash=:outhash
+ -- Find tasks with a matching outhash (that is, tasks that
+ -- are equivalent)
+ SELECT taskhash, method, unihash FROM tasks_v2 WHERE method=:method AND outhash=:outhash
+
+ -- If there is an exact match on the taskhash, return it.
+ -- Otherwise return the oldest matching outhash of any
+ -- taskhash
ORDER BY CASE WHEN taskhash=:taskhash THEN 1 ELSE 2 END,
created ASC
+
+ -- Only return one row
LIMIT 1
''', {k: data[k] for k in ('method', 'outhash', 'taskhash')})
row = cursor.fetchone()
+ # If no matching outhash was found, or one *was* found but it
+ # wasn't an exact match on the taskhash, a new entry for this
+ # taskhash should be added
if row is None or row['taskhash'] != data['taskhash']:
+ # If a row matching the outhash was found, the unihash for
+ # the new taskhash should be the same as that one.
+ # Otherwise the caller-provided unihash is used.
unihash = data['unihash']
if row is not None:
unihash = row['unihash']
@@ -88,18 +120,17 @@
if k in data:
insert_data[k] = data[k]
- cursor.execute('''INSERT INTO tasks_v1 (%s) VALUES (%s)''' % (
+ cursor.execute('''INSERT INTO tasks_v2 (%s) VALUES (%s)''' % (
', '.join(sorted(insert_data.keys())),
', '.join(':' + k for k in sorted(insert_data.keys()))),
insert_data)
logger.info('Adding taskhash %s with unihash %s', data['taskhash'], unihash)
- cursor.execute('SELECT taskhash, method, unihash FROM tasks_v1 WHERE id=:id', {'id': cursor.lastrowid})
- row = cursor.fetchone()
self.db.commit()
-
- d = {k: row[k] for k in ('taskhash', 'method', 'unihash')}
+ d = {'taskhash': data['taskhash'], 'method': data['method'], 'unihash': unihash}
+ else:
+ d = {k: row[k] for k in ('taskhash', 'method', 'unihash')}
self.send_response(200)
self.send_header('Content-Type', 'application/json; charset=utf-8')
@@ -110,17 +141,67 @@
self.send_error(400, explain=traceback.format_exc())
return
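The ORDER BY CASE clause above is what lets a single query prefer an exact taskhash match while still falling back to the oldest equivalent outhash. Its effect is easy to reproduce against a throwaway table (schema trimmed to the relevant columns):

    import sqlite3

    db = sqlite3.connect(':memory:')
    db.execute('CREATE TABLE t (taskhash TEXT, unihash TEXT, created INT)')
    db.executemany('INSERT INTO t VALUES (?, ?, ?)',
                   [('aaa', 'old', 1), ('bbb', 'new', 2)])
    # An exact taskhash match sorts first; otherwise the oldest row wins.
    row = db.execute('SELECT unihash FROM t '
                     'ORDER BY CASE WHEN taskhash=? THEN 1 ELSE 2 END, created ASC '
                     'LIMIT 1', ('bbb',)).fetchone()
    assert row[0] == 'new'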
-def create_server(addr, db, prefix=''):
+class ThreadedHTTPServer(HTTPServer):
+ quit = False
+
+ def serve_forever(self):
+ self.requestqueue = queue.Queue()
+ self.handlerthread = threading.Thread(target=self.process_request_thread)
+ self.handlerthread.daemon = False
+
+ self.handlerthread.start()
+
+ signal.signal(signal.SIGTERM, self.sigterm_exception)
+ super().serve_forever()
+ os._exit(0)
+
+ def sigterm_exception(self, signum, stackframe):
+ self.server_close()
+ os._exit(0)
+
+ def server_bind(self):
+ HTTPServer.server_bind(self)
+ self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
+
+ def process_request_thread(self):
+ while not self.quit:
+ try:
+ (request, client_address) = self.requestqueue.get(True)
+ except queue.Empty:
+ continue
+ if request is None:
+ continue
+ try:
+ self.finish_request(request, client_address)
+ except Exception:
+ self.handle_error(request, client_address)
+ finally:
+ self.shutdown_request(request)
+ os._exit(0)
+
+ def process_request(self, request, client_address):
+ self.requestqueue.put((request, client_address))
+
+ def server_close(self):
+ super().server_close()
+ self.quit = True
+ self.requestqueue.put((None, None))
+ self.handlerthread.join()
+
+def create_server(addr, dbname, prefix=''):
class Handler(HashEquivalenceServer):
pass
- Handler.prefix = prefix
- Handler.db = db
+ db = sqlite3.connect(dbname)
db.row_factory = sqlite3.Row
+ Handler.prefix = prefix
+ Handler.db = None
+ Handler.dbname = dbname
+
with contextlib.closing(db.cursor()) as cursor:
cursor.execute('''
- CREATE TABLE IF NOT EXISTS tasks_v1 (
+ CREATE TABLE IF NOT EXISTS tasks_v2 (
id INTEGER PRIMARY KEY AUTOINCREMENT,
method TEXT NOT NULL,
outhash TEXT NOT NULL,
@@ -134,9 +215,16 @@
PV TEXT,
PR TEXT,
task TEXT,
- outhash_siginfo TEXT
+ outhash_siginfo TEXT,
+
+ UNIQUE(method, outhash, taskhash)
)
''')
+ cursor.execute('CREATE INDEX IF NOT EXISTS taskhash_lookup ON tasks_v2 (method, taskhash)')
+ cursor.execute('CREATE INDEX IF NOT EXISTS outhash_lookup ON tasks_v2 (method, outhash)')
- logger.info('Starting server on %s', addr)
- return HTTPServer(addr, Handler)
+ ret = ThreadedHTTPServer(addr, Handler)
+
+ logger.info('Starting server on %s\n', ret.server_port)
+
+ return ret
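Standing the server up outside a build reduces to calling create_server() with an address and a database file name (it now opens SQLite lazily per handler, as above). A minimal launcher sketch (the port and file name are assumed examples, and bitbake's lib directory must be on sys.path):

    # Hypothetical standalone launcher for the hash equivalence server.
    from hashserv import create_server

    server = create_server(('localhost', 8686), 'hashserv.db')
    server.serve_forever()  # Blocks; SIGTERM exits via sigterm_exception().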
diff --git a/poky/bitbake/lib/hashserv/tests.py b/poky/bitbake/lib/hashserv/tests.py
index 8300a25..6845b53 100644
--- a/poky/bitbake/lib/hashserv/tests.py
+++ b/poky/bitbake/lib/hashserv/tests.py
@@ -6,30 +6,31 @@
#
import unittest
-import threading
+import multiprocessing
import sqlite3
import hashlib
import urllib.request
import json
+import tempfile
from . import create_server
class TestHashEquivalenceServer(unittest.TestCase):
def setUp(self):
- # Start an in memory hash equivalence server in the background bound to
+ # Start a hash equivalence server in the background bound to
# an ephemeral port
- db = sqlite3.connect(':memory:', check_same_thread=False)
- self.server = create_server(('localhost', 0), db)
+ self.dbfile = tempfile.NamedTemporaryFile(prefix="bb-hashserv-db-")
+ self.server = create_server(('localhost', 0), self.dbfile.name)
self.server_addr = 'http://localhost:%d' % self.server.socket.getsockname()[1]
- self.server_thread = threading.Thread(target=self.server.serve_forever)
+ self.server_thread = multiprocessing.Process(target=self.server.serve_forever)
+ self.server_thread.daemon = True
self.server_thread.start()
def tearDown(self):
# Shutdown server
s = getattr(self, 'server', None)
if s is not None:
- self.server.shutdown()
+ self.server_thread.terminate()
self.server_thread.join()
- self.server.server_close()
def send_get(self, path):
url = '%s/%s' % (self.server_addr, path)
diff --git a/poky/bitbake/lib/layerindexlib/__init__.py b/poky/bitbake/lib/layerindexlib/__init__.py
index d231cf6..77196b4 100644
--- a/poky/bitbake/lib/layerindexlib/__init__.py
+++ b/poky/bitbake/lib/layerindexlib/__init__.py
@@ -376,7 +376,7 @@
invalid.append(name)
- def _resolve_dependencies(layerbranches, ignores, dependencies, invalid):
+ def _resolve_dependencies(layerbranches, ignores, dependencies, invalid, processed=None):
for layerbranch in layerbranches:
if ignores and layerbranch.layer.name in ignores:
continue
@@ -388,6 +388,13 @@
if ignores and deplayerbranch.layer.name in ignores:
continue
+ # Since this is depth first, we need to know what we're currently processing
+ # in order to avoid infinite recursion on a dependency loop.
+ if processed and deplayerbranch.layer.name in processed:
+ # We have found a recursion...
+ logger.warning('Circular layer dependency found: %s -> %s' % (processed, deplayerbranch.layer.name))
+ continue
+
# This little block is why we can't re-use the LayerIndexObj version,
# we must be able to satisfy each dependencies across layer indexes and
# use the layer index order for priority. (r stands for replacement below)
@@ -411,7 +418,17 @@
# New dependency, we need to resolve it now... depth-first
if deplayerbranch.layer.name not in dependencies:
- (dependencies, invalid) = _resolve_dependencies([deplayerbranch], ignores, dependencies, invalid)
+ # Avoid recursion on this branch.
+ # We copy so we don't end up polluting the depth-first branch with other
+ # branches. Duplication between individual branches IS expected and
+ # handled by 'dependencies' processing.
+ if not processed:
+ local_processed = []
+ else:
+ local_processed = processed.copy()
+ local_processed.append(deplayerbranch.layer.name)
+
+ (dependencies, invalid) = _resolve_dependencies([deplayerbranch], ignores, dependencies, invalid, local_processed)
if deplayerbranch.layer.name not in dependencies:
dependencies[deplayerbranch.layer.name] = [deplayerbranch, layerdependency]
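The fix threads a 'processed' list through the depth-first walk so a dependency cycle is reported and skipped instead of recursing forever. The same guard in isolation, detached from the layerindexlib types (a generic sketch):

    # Generic depth-first resolution with the same cycle guard.
    def resolve(name, graph, resolved, processed=None):
        processed = (processed or []) + [name]
        for dep in graph.get(name, ()):
            if dep in processed:
                print('Circular dependency found: %s -> %s' % (processed, dep))
                continue
            if dep not in resolved:
                resolve(dep, graph, resolved, processed)
                resolved[dep] = True
        return resolved

    resolve('a', {'a': ['b'], 'b': ['a']}, {})  # warns instead of looping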
diff --git a/poky/bitbake/lib/prserv/db.py b/poky/bitbake/lib/prserv/db.py
index d6188a6..117d8c0 100644
--- a/poky/bitbake/lib/prserv/db.py
+++ b/poky/bitbake/lib/prserv/db.py
@@ -257,7 +257,7 @@
self.connection=sqlite3.connect(self.filename, isolation_level="EXCLUSIVE", check_same_thread = False)
self.connection.row_factory=sqlite3.Row
self.connection.execute("pragma synchronous = off;")
- self.connection.execute("PRAGMA journal_mode = WAL;")
+ self.connection.execute("PRAGMA journal_mode = MEMORY;")
self._tables={}
def disconnect(self):
diff --git a/poky/bitbake/lib/toaster/orm/models.py b/poky/bitbake/lib/toaster/orm/models.py
index 41a9f81..bb6b5de 100644
--- a/poky/bitbake/lib/toaster/orm/models.py
+++ b/poky/bitbake/lib/toaster/orm/models.py
@@ -965,12 +965,12 @@
class Target_Image_File(models.Model):
# valid suffixes for image files produced by a build
SUFFIXES = {
- 'btrfs', 'cpio', 'cpio.gz', 'cpio.lz4', 'cpio.lzma', 'cpio.xz',
- 'cramfs', 'elf', 'ext2', 'ext2.bz2', 'ext2.gz', 'ext2.lzma', 'ext4',
- 'ext4.gz', 'ext3', 'ext3.gz', 'hdddirect', 'hddimg', 'iso', 'jffs2',
- 'jffs2.sum', 'multiubi', 'qcow2', 'squashfs', 'squashfs-lzo',
+ 'btrfs', 'container', 'cpio', 'cpio.gz', 'cpio.lz4', 'cpio.lzma',
+ 'cpio.xz', 'cramfs', 'ext2', 'ext2.bz2', 'ext2.gz', 'ext2.lzma',
+ 'ext3', 'ext3.gz', 'ext4', 'ext4.gz', 'f2fs', 'hddimg', 'iso', 'jffs2',
+ 'jffs2.sum', 'multiubi', 'squashfs', 'squashfs-lz4', 'squashfs-lzo',
'squashfs-xz', 'tar', 'tar.bz2', 'tar.gz', 'tar.lz4', 'tar.xz', 'ubi',
- 'ubifs', 'vdi', 'vmdk', 'wic', 'wic.bmap', 'wic.bz2', 'wic.gz', 'wic.lzma'
+ 'ubifs', 'wic', 'wic.bz2', 'wic.gz', 'wic.lzma'
}
target = models.ForeignKey(Target)