poky: subtree update: 5951cbcabe..968fcf4989

Alejandro Hernandez (3):
      baremetal-helloworld: Use do_image_complete instead of do_deploy
      baremetal-image.bbclass: Create a class for baremetal applications or an RTOS
      baremetal-helloworld: Use baremetal-image class to deploy the application

Alejandro del Castillo (2):
      opkg-utils: upgrade to 0.4.3
      opkg: upgrade to version 0.4.3

Alexander Kanavin (30):
      dnf: upgrade 4.2.21 -> 4.2.23
      meson: upgrade 0.54.2 -> 0.54.3
      libdnf: update 0.47.0 -> 0.48.0
      ffmpeg: disable altivec on ppc by default
      dropbear: update 2019.78 -> 2020.79
      elfutils: upgrade 0.179 -> 0.180
      gnu-config: update to latest revision
      libgpg-error: update 1.37 -> 1.38
      perl: update 5.30.2 -> 5.32.0
      gst-examples: upstream releases are even numbered
      bison: upgrade 3.6.3 -> 3.6.4
      python3-cython: upgrade 0.29.19 -> 0.29.20
      stress-ng: upgrade 0.11.12 -> 0.11.14
      piglit: upgrade to latest revision
      linux-firmware: upgrade 20200519 -> 20200619
      systemtap: upgrade 4.2 -> 4.3
      alsa-lib: upgrade 1.2.2 -> 1.2.3.1
      alsa-topology-conf: upgrade 1.2.2 -> 1.2.3
      alsa-ucm-conf: upgrade 1.2.2 -> 1.2.3
      alsa-utils: upgrade 1.2.2 -> 1.2.3
      puzzles: upgrade to latest revision
      diffoscope: upgrade 147 -> 148
      libcheck: upgrade 0.14.0 -> 0.15.0
      rsync: update 3.1.3 -> 3.2.1
      sudo: upgrade 1.9.0 -> 1.9.1
      python3-numpy: update 1.18.5 -> 1.19.0
      mesa: update 20.0.7 -> 20.1.2
      go-binary-native: fix upstream version check
      Revert "python3-setuptools: patch entrypoints for faster initialization"
      python3-setuptools: upgrade 47.1.1 -> 47.3.1

Alistair Francis (1):
      opensbi: Update to OpenSBI v0.8 release

Andreas Müller (3):
      nfs-utils: upgrade 2.4.3 -> 2.5.1
      ccache: merge ccache.inc into recipe
      ccache: upgrade 3.7.9 -> 3.7.10

Andrej Valek (2):
      busybox: 1.31.1 -> 1.32.0
      dropbear: update to 2020.80

Andrey Zhizhikin (1):
      kernel/yocto: fix search for defconfig from src_uri

Armin Kuster (1):
      wpa-supplicant: Security fix CVE-2020-12695

Bjarne Michelsen (1):
      devtool: default to empty string, if LIC_FILES_CHKSUM is not available

Bruce Ashfield (10):
      kernel/yocto: ensure that defconfigs are processed first
      linux-yocto/5.4: update to v5.4.45
      linux-yocto-rt/5.4: update to rt25
      linux-yocto/5.4: update to v5.4.46
      linux-yocto/5.4: update to v5.4.47
      linux-yocto/5.4: update to v5.4.49 and -rt28
      yocto-bsps: bump reference boards to v5.4.49
      linux-yocto/5.4: update to v5.4.50
      linux-yocto-dev: bump to 5.8-rc
      lttng-modules: bump devupstream to v2.12.1+

Changqing Li (5):
      xinit: add rxvt-unicode in RDEPENDS
      modutils-initscripts: update postinst
      initscripts: update postinst
      gtk-icon-cache.bbclass: add runtime dependency
      logrotate.py: fix occasional testimage failure

Chen Qi (2):
      oescripts.py: fix typo
      oescripts: ignore whitespaces when comparing lines

Chris Laplante (2):
      bitbake: contrib/vim: synchronize from kergoth/vim-bitbake rev 4225ee8b4818d7e4696520567216a3a031c26f7d
      bitbake: ui/teamcity: don't use removed logging classes

Christian Eggers (1):
      libnl: Extend for native/nativesdk

Damian Wrobel (1):
      rootfs: do not let ldconfig create symlinks

Daniel Klauer (2):
      uboot-sign: Refactor do_deploy prefunc to do_deploy_prepend
      deploy.bbclass: Clean DEPLOYDIR before do_deploy

David Khouya (2):
      bitbake: lib/ui/taskexp: Validate gi import
      bitbake: lib/ui/taskexp: Fix missing Gtk import

Hannu Lounento (1):
      openssl: move ${libdir}/[...]/openssl.cnf to ${PN}-conf

Hongxu Jia (1):
      iso-codes: switch upstream branch master -> main

Jason Wessel (1):
      runqemu: If using a vmtype image do not add the -no-reboot flag

Joe Slater (1):
      jquery: use ${S}

Joshua Watt (4):
      bitbake: hashserv: Chunkify large messages
      bitbake: siggen: Fix error when hash equivalence has an exception
      classes/archiver: run do_unpack_and_patch after do_preconfigure
      classes/archiver: do_configure should not depend on do_ar_patched

Khem Raj (2):
      musl: Update to tip of master
      rxvt-unicode: Disable wtmp on musl

Konrad Weihmann (2):
      systemd: remove kernel-install from base pkg
      bitbake.conf: fix whitespace issues

Lee Chee Yang (3):
      json-c: fix CVE-2020-12762
      qemu: fix CVE-2020-10761
      oeqa/core/loader: refine regex to find module

Lili Li (1):
      kernel.bbclass: Fix Module.symvers support

Matt Madison (1):
      kernel.bbclass: add gzip-native to do_deploy dependencies

Max Krummenacher (2):
      cogl-1.0: don't require eglmesaext.h
      cogl-1.0: cope with missing x11 headers

Mingli Yu (2):
      python3-libarchive-c: add the missing rdepends
      python3: add ldconfig rdepends for python3-ctypes

Nicolas Dechesne (1):
      checklayer: parse LAYERDEPENDS with bb.utils.explode_dep_versions2()

Pierre-Jean Texier (3):
      libubootenv: bump to revision 86bd30a
      curl: upgrade 7.71.0 -> 7.71.1
      diffoscope: upgrade 148 -> 150

Rahul Kumar (1):
      bzip2: Add test suite for bzip2

Rasmus Villemoes (1):
      coreutils: don't split stdbuf to own package with single-binary

Richard Purdie (13):
      pseudo: Switch to oe-core branch in git repo
      pseudo: merge in fixes for setfacl issue
      oeqa/selftest: Clean up separate builddir in success case when non-threaded
      populate_sdk_ext: Fix to use python3, not python
      bitbake: taskdata: Improve handling of regex in ASSUME_PROVIDED
      bitbake: runqueue: Avoid unpickle errors in rare cases
      bitbake: msg: Avoid issues where paths have relative components
      oeqa/selftest: recipetool/devtool: Avoid load_plugin test race
      oeqa/targetcontrol: Attempt to fix log closure warning message
      rootfs-postcommands: Improve/fix rootfs_check_host_user_contaminated
      spdx: Remove the class as its obsolete
      adwaita-icon-theme: Add missing license files to LIC_FILES_CHKSUM
      bitbake: server/process: Increase timeout for commands

Ross Burton (3):
      ovmf: build natively everywhere
      common-licenses: fix filename of BSD-2-Clause-Patent
      gtk+3: fix reproducible build failure

Timon Ulrich (2):
      kernel.bbclass: add lz4 dependency and fix the call to lz4
      kernel.bbclass: make dependency on lzop-native conditional

Vacek, Patrick (1):
      oeqa/core/loader: fix regex to include numbers

Wang Mingyu (1):
      gtk+3: upgrade 3.24.20 -> 3.24.21

Yanfei Xu (1):
      classes/kernel: Use a copy of image for kernel*.rpm if fs doesn't support symlinks

akuster (5):
      libuv: update to the last version in meta-oe
      bitbake: test/fetch: change to better svn source
      overview-manual: add SPDX license header
      mega-manual: Add SPDX license headers
      ref-manual: Add SPDX license headers

hongxu (2):
      qemu: switches from libcap to libcap-ng for PACKAGECONFIG virtfs
      cpio: add nativesdk support

zangrc (1):
      libjpeg-turbo: upgrade 2.0.4 -> 2.0.5

Signed-off-by: Andrew Geissler <geissonator@yahoo.com>
Change-Id: I41e066e5957aa74c9a24e86a6c214bcf96e9c46b
diff --git a/poky/bitbake/lib/bb/msg.py b/poky/bitbake/lib/bb/msg.py
index c0b344e..2d88c4e 100644
--- a/poky/bitbake/lib/bb/msg.py
+++ b/poky/bitbake/lib/bb/msg.py
@@ -280,7 +280,7 @@
     logconfig = copy.deepcopy(defaultconfig)
 
     if userconfigfile:
-        with open(userconfigfile, 'r') as f:
+        with open(os.path.normpath(userconfigfile), 'r') as f:
             if userconfigfile.endswith('.yml') or userconfigfile.endswith('.yaml'):
                 import yaml
                 userconfig = yaml.load(f)
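
A minimal sketch (path invented) of what the added os.path.normpath() call
does to a user config path carrying relative components:

    import os

    # Hypothetical config path assembled from fragments
    userconfigfile = "/home/user/project/../project/conf/logging.yml"
    print(os.path.normpath(userconfigfile))
    # -> /home/user/project/conf/logging.yml
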
diff --git a/poky/bitbake/lib/bb/runqueue.py b/poky/bitbake/lib/bb/runqueue.py
index adb34a8..02a261e 100644
--- a/poky/bitbake/lib/bb/runqueue.py
+++ b/poky/bitbake/lib/bb/runqueue.py
@@ -2967,7 +2967,12 @@
             while index != -1 and self.queue.startswith(b"<event>"):
                 try:
                     event = pickle.loads(self.queue[7:index])
-                except ValueError as e:
+                except (ValueError, pickle.UnpicklingError, AttributeError, IndexError) as e:
+                    if isinstance(e, pickle.UnpicklingError) and "truncated" in str(e):
+                        # The pickled data could itself contain "</event>", so search for the next
+                        # occurrence and unpickle again; this should be the only way an unpickle error occurs
+                        index = self.queue.find(b"</event>", index + 1)
+                        continue
                     bb.msg.fatal("RunQueue", "failed load pickle '%s': '%s'" % (e, self.queue[7:index]))
                 bb.event.fire_from_worker(event, self.d)
                 if isinstance(event, taskUniHashUpdate):
@@ -2979,7 +2984,7 @@
             while index != -1 and self.queue.startswith(b"<exitcode>"):
                 try:
                     task, status = pickle.loads(self.queue[10:index])
-                except ValueError as e:
+                except (ValueError, pickle.UnpicklingError, AttributeError, IndexError) as e:
                     bb.msg.fatal("RunQueue", "failed load pickle '%s': '%s'" % (e, self.queue[10:index]))
                 self.rqexec.runqueue_process_waitpid(task, status)
                 found = True
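
The framing problem behind the runqueue change can be reproduced standalone
(payload contents invented): a pickled event may itself contain the
b"</event>" delimiter, so the first find() can cut the payload short, and
unpickling has to be retried at the next delimiter, as the patched code does:

    import pickle

    payload = pickle.dumps("event data containing </event> inside")
    queue = b"<event>" + payload + b"</event>"

    index = queue.find(b"</event>")
    try:
        pickle.loads(queue[7:index])  # truncated at the embedded delimiter
    except pickle.UnpicklingError:
        # Skip ahead to the real terminator and unpickle again
        index = queue.find(b"</event>", index + 1)
        print(pickle.loads(queue[7:index]))
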
diff --git a/poky/bitbake/lib/bb/server/process.py b/poky/bitbake/lib/bb/server/process.py
index 69aae62..83385ba 100644
--- a/poky/bitbake/lib/bb/server/process.py
+++ b/poky/bitbake/lib/bb/server/process.py
@@ -331,7 +331,9 @@
     def runCommand(self, command):
         self.connection.send(command)
         if not self.recv.poll(30):
-            raise ProcessTimeout("Timeout while waiting for a reply from the bitbake server")
+            logger.note("No reply from server in 30s")
+            if not self.recv.poll(30):
+                raise ProcessTimeout("Timeout while waiting for a reply from the bitbake server (60s)")
         return self.recv.get()
 
     def updateFeatureSet(self, featureset):
diff --git a/poky/bitbake/lib/bb/siggen.py b/poky/bitbake/lib/bb/siggen.py
index 872333d..4c63b0b 100644
--- a/poky/bitbake/lib/bb/siggen.py
+++ b/poky/bitbake/lib/bb/siggen.py
@@ -14,6 +14,7 @@
 from bb.checksum import FileChecksumCache
 from bb import runqueue
 import hashserv
+import hashserv.client
 
 logger = logging.getLogger('BitBake.SigGen')
 hashequiv_logger = logging.getLogger('BitBake.SigGen.HashEquiv')
diff --git a/poky/bitbake/lib/bb/taskdata.py b/poky/bitbake/lib/bb/taskdata.py
index d13a124..ffbaf36 100644
--- a/poky/bitbake/lib/bb/taskdata.py
+++ b/poky/bitbake/lib/bb/taskdata.py
@@ -21,8 +21,13 @@
     Whether or not the string 'target' matches
     any one string of the strings which can be regular expression string
     """
-    return any(name == target or re.match(name, target)
-               for name in strings)
+    for name in strings:
+        if name.startswith("^") or name.endswith("$"):
+            if re.match(name, target):
+                return True
+        elif name == target:
+            return True
+    return False
 
 class TaskEntry:
     def __init__(self):
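
The behaviour change is easiest to see in isolation (recipe names invented):
anchored ASSUME_PROVIDED entries are still treated as regular expressions,
while unanchored entries now require an exact match rather than the old
prefix-style re.match():

    import re

    def re_match_strings(target, strings):
        # Same logic as the patched bb.taskdata helper
        for name in strings:
            if name.startswith("^") or name.endswith("$"):
                if re.match(name, target):
                    return True
            elif name == target:
                return True
        return False

    provided = ["git-native", "^python3-.*-native$"]
    print(re_match_strings("git-native", provided))          # True: exact match
    print(re_match_strings("python3-pip-native", provided))  # True: anchored regex
    print(re_match_strings("git-native-doc", provided))      # False (old code: True)
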
diff --git a/poky/bitbake/lib/bb/tests/fetch.py b/poky/bitbake/lib/bb/tests/fetch.py
index 4697ef5..29c96b2 100644
--- a/poky/bitbake/lib/bb/tests/fetch.py
+++ b/poky/bitbake/lib/bb/tests/fetch.py
@@ -1031,7 +1031,7 @@
 
         bb.process.run("svn co %s svnfetch_co" % self.repo_url, cwd=self.tempdir)
         # GitHub will emulate SVN.  Use this to check if we're downloading...
-        bb.process.run("svn propset svn:externals 'bitbake http://github.com/openembedded/bitbake' .",
+        bb.process.run("svn propset svn:externals 'bitbake svn://vcs.pcre.org/pcre2/code' .",
                        cwd=os.path.join(self.tempdir, 'svnfetch_co', 'trunk'))
         bb.process.run("svn commit --non-interactive -m 'Add external'",
                        cwd=os.path.join(self.tempdir, 'svnfetch_co', 'trunk'))
diff --git a/poky/bitbake/lib/bb/ui/taskexp.py b/poky/bitbake/lib/bb/ui/taskexp.py
index 8fff244..05e3233 100644
--- a/poky/bitbake/lib/bb/ui/taskexp.py
+++ b/poky/bitbake/lib/bb/ui/taskexp.py
@@ -8,9 +8,16 @@
 #
 
 import sys
-import gi
-gi.require_version('Gtk', '3.0')
-from gi.repository import Gtk, Gdk, GObject
+
+try:
+    import gi
+    gi.require_version('Gtk', '3.0')
+    from gi.repository import Gtk, Gdk, GObject
+except ValueError:
+    sys.exit("FATAL: Gtk version needs to be 3.0")
+except ImportError:
+    sys.exit("FATAL: Gtk ui could not load the required gi python module")
+
 import threading
 from xmlrpc import client
 import bb
diff --git a/poky/bitbake/lib/bb/ui/teamcity.py b/poky/bitbake/lib/bb/ui/teamcity.py
index 1854292..fca46c2 100644
--- a/poky/bitbake/lib/bb/ui/teamcity.py
+++ b/poky/bitbake/lib/bb/ui/teamcity.py
@@ -167,8 +167,6 @@
         forcelevel = bb.msg.BBLogFormatter.ERROR
     else:
         forcelevel = bb.msg.BBLogFormatter.WARNING
-    bb.msg.addDefaultlogFilter(console, bb.msg.BBLogFilterStdOut, forcelevel)
-    bb.msg.addDefaultlogFilter(errconsole, bb.msg.BBLogFilterStdErr)
     console.setFormatter(format)
     errconsole.setFormatter(format)
     if not bb.msg.has_console_handler(logger):
diff --git a/poky/bitbake/lib/hashserv/__init__.py b/poky/bitbake/lib/hashserv/__init__.py
index c331862..f95e8f4 100644
--- a/poky/bitbake/lib/hashserv/__init__.py
+++ b/poky/bitbake/lib/hashserv/__init__.py
@@ -6,12 +6,20 @@
 from contextlib import closing
 import re
 import sqlite3
+import itertools
+import json
 
 UNIX_PREFIX = "unix://"
 
 ADDR_TYPE_UNIX = 0
 ADDR_TYPE_TCP = 1
 
+# The Python async server defaults to a 64K receive buffer, so we hardcode our
+# maximum chunk size. It would be better if the client and server reported to
+# each other what the maximum chunk sizes were, but that will slow down the
+# connection setup with a round trip delay so I'd rather not do that unless it
+# is necessary
+DEFAULT_MAX_CHUNK = 32 * 1024
 
 def setup_database(database, sync=True):
     db = sqlite3.connect(database)
@@ -66,6 +74,20 @@
         return (ADDR_TYPE_TCP, (host, int(port)))
 
 
+def chunkify(msg, max_chunk):
+    if len(msg) < max_chunk - 1:
+        yield ''.join((msg, "\n"))
+    else:
+        yield ''.join((json.dumps({
+                'chunk-stream': None
+            }), "\n"))
+
+        args = [iter(msg)] * (max_chunk - 1)
+        for m in map(''.join, itertools.zip_longest(*args, fillvalue='')):
+            yield ''.join(itertools.chain(m, "\n"))
+        yield "\n"
+
+
 def create_server(addr, dbname, *, sync=True):
     from . import server
     db = setup_database(dbname, sync=sync)
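
A hedged illustration of the wire format chunkify() produces (message content
invented): an announcement line, fixed-size slices, then a blank line as the
terminator, which the peer joins back together:

    import json
    from hashserv import chunkify

    msg = json.dumps({"report": {"outhash_siginfo": "0" * 100}})
    chunks = list(chunkify(msg, 32))

    assert json.loads(chunks[0]) == {"chunk-stream": None}   # announcement
    assert chunks[-1] == "\n"                                # terminator
    assert "".join(c.rstrip("\n") for c in chunks[1:]) == msg
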
diff --git a/poky/bitbake/lib/hashserv/client.py b/poky/bitbake/lib/hashserv/client.py
index 46085d6..a29af83 100644
--- a/poky/bitbake/lib/hashserv/client.py
+++ b/poky/bitbake/lib/hashserv/client.py
@@ -7,6 +7,7 @@
 import logging
 import socket
 import os
+from . import chunkify, DEFAULT_MAX_CHUNK
 
 
 logger = logging.getLogger('hashserv.client')
@@ -25,6 +26,7 @@
         self.reader = None
         self.writer = None
         self.mode = self.MODE_NORMAL
+        self.max_chunk = DEFAULT_MAX_CHUNK
 
     def connect_tcp(self, address, port):
         def connect_sock():
@@ -58,7 +60,7 @@
             self.reader = self._socket.makefile('r', encoding='utf-8')
             self.writer = self._socket.makefile('w', encoding='utf-8')
 
-            self.writer.write('OEHASHEQUIV 1.0\n\n')
+            self.writer.write('OEHASHEQUIV 1.1\n\n')
             self.writer.flush()
 
             # Restore mode if the socket is being re-created
@@ -91,18 +93,35 @@
                 count += 1
 
     def send_message(self, msg):
-        def proc():
-            self.writer.write('%s\n' % json.dumps(msg))
-            self.writer.flush()
-
-            l = self.reader.readline()
-            if not l:
+        def get_line():
+            line = self.reader.readline()
+            if not line:
                 raise HashConnectionError('Connection closed')
 
-            if not l.endswith('\n'):
+            if not line.endswith('\n'):
                 raise HashConnectionError('Bad message %r' % message)
 
-            return json.loads(l)
+            return line
+
+        def proc():
+            for c in chunkify(json.dumps(msg), self.max_chunk):
+                self.writer.write(c)
+            self.writer.flush()
+
+            l = get_line()
+
+            m = json.loads(l)
+            if 'chunk-stream' in m:
+                lines = []
+                while True:
+                    l = get_line().rstrip('\n')
+                    if not l:
+                        break
+                    lines.append(l)
+
+                m = json.loads(''.join(lines))
+
+            return m
 
         return self._send_wrapper(proc)
 
@@ -155,6 +174,14 @@
         m['unihash'] = unihash
         return self.send_message({'report-equiv': m})
 
+    def get_taskhash(self, method, taskhash, all_properties=False):
+        self._set_mode(self.MODE_NORMAL)
+        return self.send_message({'get': {
+            'taskhash': taskhash,
+            'method': method,
+            'all': all_properties
+        }})
+
     def get_stats(self):
         self._set_mode(self.MODE_NORMAL)
         return self.send_message({'get-stats': None})
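
Assuming the existing hashserv.create_client() entry point (the socket path
and hashes below are invented), the new get_taskhash() helper is what
exercises the chunk-stream reassembly when the stored siginfo exceeds one
chunk:

    import hashserv

    client = hashserv.create_client("unix:///tmp/hashserve.sock")
    row = client.get_taskhash("TestMethod", "c665584e" * 5, all_properties=True)
    if row is not None:
        print(row["unihash"], len(row.get("outhash_siginfo", "")))
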
diff --git a/poky/bitbake/lib/hashserv/server.py b/poky/bitbake/lib/hashserv/server.py
index cc7e482..8105071 100644
--- a/poky/bitbake/lib/hashserv/server.py
+++ b/poky/bitbake/lib/hashserv/server.py
@@ -13,6 +13,7 @@
 import signal
 import socket
 import time
+from . import chunkify, DEFAULT_MAX_CHUNK
 
 logger = logging.getLogger('hashserv.server')
 
@@ -107,12 +108,29 @@
         return {k: getattr(self, k) for k in ('num', 'total_time', 'max_time', 'average', 'stdev')}
 
 
+class ClientError(Exception):
+    pass
+
 class ServerClient(object):
+    FAST_QUERY = 'SELECT taskhash, method, unihash FROM tasks_v2 WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1'
+    ALL_QUERY =  'SELECT *                         FROM tasks_v2 WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1'
+
     def __init__(self, reader, writer, db, request_stats):
         self.reader = reader
         self.writer = writer
         self.db = db
         self.request_stats = request_stats
+        self.max_chunk = DEFAULT_MAX_CHUNK
+
+        self.handlers = {
+            'get': self.handle_get,
+            'report': self.handle_report,
+            'report-equiv': self.handle_equivreport,
+            'get-stream': self.handle_get_stream,
+            'get-stats': self.handle_get_stats,
+            'reset-stats': self.handle_reset_stats,
+            'chunk-stream': self.handle_chunk,
+        }
 
     async def process_requests(self):
         try:
@@ -125,7 +143,11 @@
                 return
 
             (proto_name, proto_version) = protocol.decode('utf-8').rstrip().split()
-            if proto_name != 'OEHASHEQUIV' or proto_version != '1.0':
+            if proto_name != 'OEHASHEQUIV':
+                return
+
+            proto_version = tuple(int(v) for v in proto_version.split('.'))
+            if proto_version < (1, 0) or proto_version > (1, 1):
                 return
 
             # Read headers. Currently, no headers are implemented, so look for
@@ -140,40 +162,34 @@
                     break
 
             # Handle messages
-            handlers = {
-                'get': self.handle_get,
-                'report': self.handle_report,
-                'report-equiv': self.handle_equivreport,
-                'get-stream': self.handle_get_stream,
-                'get-stats': self.handle_get_stats,
-                'reset-stats': self.handle_reset_stats,
-            }
-
             while True:
                 d = await self.read_message()
                 if d is None:
                     break
-
-                for k in handlers.keys():
-                    if k in d:
-                        logger.debug('Handling %s' % k)
-                        if 'stream' in k:
-                            await handlers[k](d[k])
-                        else:
-                            with self.request_stats.start_sample() as self.request_sample, \
-                                    self.request_sample.measure():
-                                await handlers[k](d[k])
-                        break
-                else:
-                    logger.warning("Unrecognized command %r" % d)
-                    break
-
+                await self.dispatch_message(d)
                 await self.writer.drain()
+        except ClientError as e:
+            logger.error(str(e))
         finally:
             self.writer.close()
 
+    async def dispatch_message(self, msg):
+        for k in self.handlers.keys():
+            if k in msg:
+                logger.debug('Handling %s' % k)
+                if 'stream' in k:
+                    await self.handlers[k](msg[k])
+                else:
+                    with self.request_stats.start_sample() as self.request_sample, \
+                            self.request_sample.measure():
+                        await self.handlers[k](msg[k])
+                return
+
+        raise ClientError("Unrecognized command %r" % msg)
+
     def write_message(self, msg):
-        self.writer.write(('%s\n' % json.dumps(msg)).encode('utf-8'))
+        for c in chunkify(json.dumps(msg), self.max_chunk):
+            self.writer.write(c.encode('utf-8'))
 
     async def read_message(self):
         l = await self.reader.readline()
@@ -191,14 +207,38 @@
             logger.error('Bad message from client: %r' % message)
             raise e
 
+    async def handle_chunk(self, request):
+        lines = []
+        try:
+            while True:
+                l = await self.reader.readline()
+                l = l.rstrip(b"\n").decode("utf-8")
+                if not l:
+                    break
+                lines.append(l)
+
+            msg = json.loads(''.join(lines))
+        except (json.JSONDecodeError, UnicodeDecodeError) as e:
+            logger.error('Bad message from client: %r' % lines)
+            raise e
+
+        if 'chunk-stream' in msg:
+            raise ClientError("Nested chunks are not allowed")
+
+        await self.dispatch_message(msg)
+
     async def handle_get(self, request):
         method = request['method']
         taskhash = request['taskhash']
 
-        row = self.query_equivalent(method, taskhash)
+        if request.get('all', False):
+            row = self.query_equivalent(method, taskhash, self.ALL_QUERY)
+        else:
+            row = self.query_equivalent(method, taskhash, self.FAST_QUERY)
+
         if row is not None:
             logger.debug('Found equivalent task %s -> %s', (row['taskhash'], row['unihash']))
-            d = {k: row[k] for k in ('taskhash', 'method', 'unihash')}
+            d = {k: row[k] for k in row.keys()}
 
             self.write_message(d)
         else:
@@ -228,7 +268,7 @@
 
                 (method, taskhash) = l.split()
                 #logger.debug('Looking up %s %s' % (method, taskhash))
-                row = self.query_equivalent(method, taskhash)
+                row = self.query_equivalent(method, taskhash, self.FAST_QUERY)
                 if row is not None:
                     msg = ('%s\n' % row['unihash']).encode('utf-8')
                     #logger.debug('Found equivalent task %s -> %s', (row['taskhash'], row['unihash']))
@@ -328,7 +368,7 @@
             # Fetch the unihash that will be reported for the taskhash. If the
             # unihash matches, it means this row was inserted (or the mapping
             # was already valid)
-            row = self.query_equivalent(data['method'], data['taskhash'])
+            row = self.query_equivalent(data['method'], data['taskhash'], self.FAST_QUERY)
 
             if row['unihash'] == data['unihash']:
                 logger.info('Adding taskhash equivalence for %s with unihash %s',
@@ -354,12 +394,11 @@
         self.request_stats.reset()
         self.write_message(d)
 
-    def query_equivalent(self, method, taskhash):
+    def query_equivalent(self, method, taskhash, query):
         # This is part of the inner loop and must be as fast as possible
         try:
             cursor = self.db.cursor()
-            cursor.execute('SELECT taskhash, method, unihash FROM tasks_v2 WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1',
-                           {'method': method, 'taskhash': taskhash})
+            cursor.execute(query, {'method': method, 'taskhash': taskhash})
             return cursor.fetchone()
         except:
             cursor.close()
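
The FAST_QUERY/ALL_QUERY split can be sanity-checked against a scratch
database (schema trimmed to the relevant columns, values invented); the fast
variant stays on the hot path while the full row is only fetched on request:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.row_factory = sqlite3.Row
    db.execute("CREATE TABLE tasks_v2 (taskhash TEXT, method TEXT, "
               "unihash TEXT, outhash_siginfo TEXT, created DATETIME)")
    db.execute("INSERT INTO tasks_v2 VALUES ('aa', 'm', 'uu', 'sig', 1)")

    args = {"method": "m", "taskhash": "aa"}
    fast = ("SELECT taskhash, method, unihash FROM tasks_v2 "
            "WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1")
    full = ("SELECT * FROM tasks_v2 "
            "WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1")
    print(dict(db.execute(fast, args).fetchone()))  # three columns only
    print(dict(db.execute(full, args).fetchone()))  # every column, incl. siginfo
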
diff --git a/poky/bitbake/lib/hashserv/tests.py b/poky/bitbake/lib/hashserv/tests.py
index a5472a9..6e86295 100644
--- a/poky/bitbake/lib/hashserv/tests.py
+++ b/poky/bitbake/lib/hashserv/tests.py
@@ -99,6 +99,29 @@
         result = self.client.get_unihash(self.METHOD, taskhash)
         self.assertEqual(result, unihash)
 
+    def test_huge_message(self):
+        # Simple test that hashes can be created
+        taskhash = 'c665584ee6817aa99edfc77a44dd853828279370'
+        outhash = '3c979c3db45c569f51ab7626a4651074be3a9d11a84b1db076f5b14f7d39db44'
+        unihash = '90e9bc1d1f094c51824adca7f8ea79a048d68824'
+
+        result = self.client.get_unihash(self.METHOD, taskhash)
+        self.assertIsNone(result, msg='Found unexpected task, %r' % result)
+
+        siginfo = "0" * (self.client.max_chunk * 4)
+
+        result = self.client.report_unihash(taskhash, self.METHOD, outhash, unihash, {
+            'outhash_siginfo': siginfo
+        })
+        self.assertEqual(result['unihash'], unihash, 'Server returned bad unihash')
+
+        result = self.client.get_taskhash(self.METHOD, taskhash, True)
+        self.assertEqual(result['taskhash'], taskhash)
+        self.assertEqual(result['unihash'], unihash)
+        self.assertEqual(result['method'], self.METHOD)
+        self.assertEqual(result['outhash'], outhash)
+        self.assertEqual(result['outhash_siginfo'], siginfo)
+
     def test_stress(self):
         def query_server(failures):
             client = Client(self.server.address)