commit 1d1d7784f7858880f4dd732fd287517daf1e1785
Author:    Ed Tanous <ed@tanous.net>  Tue Apr 09 12:54:08 2024 -0700
Committer: Ed Tanous <ed@tanous.net>  Wed Apr 24 18:52:39 2024 +0000
Tree:      874b3a7cb049f007b1c1fa4d28ad8f44e5e2539e
Parent:    4ba5be51e3fcbeed49a6a312b4e6b2f1ea7447ba
Fix large content error codes

When pushing multi-part payloads, it's quite helpful if the server supports the "Expect: 100-Continue" header field. On a large file push, this allows the server to reject a request before the payload is actually sent, thereby saving bandwidth and giving the user more information.

Since commit 3909dc82a003893812f598434d6c4558107afa28 by James (merged July 2020), bmcweb has simply closed the connection if a user attempts to send too much data, which kept the bmcweb implementation simpler. Unfortunately, to a security tester this has the appearance on the network of a crash, which will likely then get filed as a "verify this isn't failing" bug. In addition, the default args on curl multipart upload enable the Expect: 100-Continue behavior, so folks testing must have been disabling that behavior. Bmcweb should just support the right thing here.

Unfortunately, closing a connection uncleanly is easy; closing a connection cleanly is difficult. This requires a pretty large refactor of the http connection class to accomplish.

Tested: Created files of various sizes (note, the default body limit is 30 MB) and uploaded them with and without a username.

```
dd if=/dev/zero of=16mb.txt bs=1048576 count=16
dd if=/dev/zero of=32mb.txt bs=1048576 count=32
curl -k --location -X POST https://192.168.7.2/redfish/v1/UpdateService/update \
  -F 'UpdateParameters={"Targets":["/redfish/v1/Managers/bmc"]};type=application/json' \
  -F UpdateFile=@32mb.txt -v
```

Without a username:
- 32 MB returns < HTTP/1.1 413 Payload Too Large
- 16 MB returns < HTTP/1.1 401 Unauthorized

With a username:
- 32 MB returns < HTTP/1.1 413 Payload Too Large
- 16 MB returns < HTTP/1.1 400 Bad Request

Note: in all cases except the last one, the payload is never sent from curl.

Redfish protocol validator fails no new tests (SSE failure still present). Redfish service validator passes.

Change-Id: I72bc8bbc49a05555c31dc7209292f846ec411d43
Signed-off-by: Ed Tanous <ed@tanous.net>
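For reference, a minimal way to observe the 100-Continue handshake from the client side is sketched below; the host address and file name are reused from the test above, and the exact status codes depend on the configured body limit and credentials.

```
# curl already sends "Expect: 100-continue" for large multipart bodies, but it
# can be forced explicitly. With -v, the verbose output shows whether the
# server answers "HTTP/1.1 100 Continue" or rejects the request (for example
# 413 Payload Too Large) before the 32 MB body is ever transmitted.
curl -k -v -X POST https://192.168.7.2/redfish/v1/UpdateService/update \
  -H "Expect: 100-continue" \
  -F 'UpdateParameters={"Targets":["/redfish/v1/Managers/bmc"]};type=application/json' \
  -F UpdateFile=@32mb.txt

# Passing -H "Expect:" instead suppresses the handshake, so the body is sent
# without waiting for the server's verdict.
```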
This component attempts to be a "do everything" embedded webserver for OpenBMC.
The webserver implements a few distinct interfaces:
At a protocol level, bmcweb supports HTTP and HTTPS. TLS is supported through OpenSSL.
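As a quick connectivity check, the Redfish service root can be fetched over HTTPS; ${bmc} is a placeholder for the BMC address, and -k skips verification when the default self-signed certificate is in use.

```
# Fetch the unauthenticated Redfish service root over HTTPS.
curl -k https://${bmc}/redfish/v1
```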
Bmcweb supports multiple authentication protocols:
Each of these authentication types can be enabled or disabled either via runtime policy changes (through the relevant Redfish APIs) or via configure-time options. All authentication mechanisms supporting username/password are routed to libpam, to allow for customization of the authentication implementation.
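As an illustration of two of the username/password-backed flows (the account name and password below are placeholders, and the session flow follows the standard Redfish SessionService pattern):

```
# HTTP Basic authentication on a single request.
curl -k -u root:0penBmc https://${bmc}/redfish/v1/Managers/bmc

# Redfish session login: on success the response carries an X-Auth-Token
# header that can be sent on later requests instead of credentials.
curl -k -i -X POST https://${bmc}/redfish/v1/SessionService/Sessions \
  -H "Content-Type: application/json" \
  -d '{"UserName": "root", "Password": "0penBmc"}'
```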
All authorization in bmcweb is determined at routing time, per route, and conforms to the Redfish PrivilegeRegistry.
*Note: Non-Redfish functions are mapped to the closest equivalent Redfish privilege level.
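For example, under the standard PrivilegeRegistry a user holding only the Redfish ReadOnly role can read resources but should be rejected on writes. This is a sketch only; the account, password, and patched property are illustrative.

```
# GET succeeds with only the Login privilege...
curl -k -u readonly_user:password https://${bmc}/redfish/v1/Managers/bmc

# ...but a PATCH on the same resource requires ConfigureManager, so the
# route-level authorization check is expected to answer 403 Forbidden.
curl -k -u readonly_user:password -X PATCH \
  https://${bmc}/redfish/v1/Managers/bmc \
  -H "Content-Type: application/json" \
  -d '{"DateTimeLocalOffset": "+00:00"}'
```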
bmcweb is configured per the meson build files. Available options are documented in meson_options.txt.
```
meson setup builddir
ninja -C builddir
```
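Options can be listed and overridden through the usual meson workflow; the project-specific option names come from meson_options.txt, so only built-in meson options are shown here.

```
# Show all configurable options and their current values for this build dir.
meson configure builddir

# Override an option when (re)configuring; project-specific options from
# meson_options.txt use the same -Dname=value form.
meson setup builddir -Dbuildtype=minsize --reconfigure
```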
If any of the dependencies are not found on the host system during configuration, meson will automatically download them via its wrap dependencies mentioned in bmcweb/subprojects.
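The standard meson wrap controls apply here; for example, dependencies can be forced to build from the bundled wraps, or pre-fetched for offline builds.

```
# Build every dependency from the wraps under subprojects/ instead of using
# host-provided packages.
meson setup builddir --wrap-mode=forcefallback

# Pre-download the wrap sources (useful when the build machine is offline).
meson subprojects download
```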
bmcweb relies on some on-system data for storage of persistent data that is internal to the process. Details on the exact data stored, and when it is read and written, can be seen in the persistent_data namespace.
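For a quick look at what is currently persisted on a running system, the store can be dumped directly; the file name and location below are an assumption, and the persistent_data namespace remains the authoritative reference.

```
# Dump bmcweb's persistent store on the BMC itself. The path is an assumption
# based on typical OpenBMC images; check the persistent_data namespace for the
# actual file name and location.
cat /home/root/bmcweb_persistent_data.json
```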
When SSL support is enabled and a usable certificate is not found, bmcweb will generate a self-signed certificate before launching the server. Please see the bmcweb source code for details on the parameters this certificate is built with.
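To inspect the certificate actually being served (subject, validity dates, and so on), the standard openssl client can be pointed at the running server; ${bmc} is again a placeholder.

```
# Show the certificate bmcweb presents on port 443 without verifying it.
openssl s_client -connect ${bmc}:443 </dev/null 2>/dev/null |
  openssl x509 -noout -subject -dates
```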
bmcweb is capable of aggregating resources from satellite BMCs. Refer to AGGREGATION.md for more information on how to enable and use this feature.