
On Mon, Sep 8, 2025 at 2:31 PM Darryl Green <darryl.green@gmail.com> wrote:
The fact in a simple enough program/usage you can ensure it is impossible to get these when using sqlite (rather than it being simply broken to assume you wont get them in a shared access database) doesn't mean the library interface should make more advanced usage ugly and require use of a more or completely raw sqlite3 interface.
My own feeling is that an SQLite wrapper should have two levels of wrapping.

One level sticks to SQLite's API almost 100%, with RAII types for handles, and classes that expose binding and getting values (and pushing results from custom functions or vtables) in a type-safe manner, with overloading where necessary. It supports std::string_view for text and std::span<std::byte> for blobs, exposes explicitly static (by-ref) vs dynamic (i.e. copy) binding, and treats NULL as a datatype.

The other layer, on top, provides higher-level abstractions implemented on the lower-level wrapping of SQLite's API. A std::ref<T> binding means by-ref binding (no copy in SQLite; the client code knows the (large) memory lives at least until the _step() call). Getting a result yields std::optional<T>, instead of throwing on NULL, as a guard-rail / hand-holding. Etc... And then the even higher-level magic of getting rows directly into structs within for-each loops.

The first layer should be usable on its own, and be verbose and explicit like SQLite is, but with zero to almost no overhead: it should basically disappear once compiled. I looked at the vtable example, and saw it returns boost::variant2. That means an extra copy compared to using [sqlite3_result_blob][1] directly, and it hides the destructor object you can give SQLite to manage the lifetime of what you hand it (please correct me if I'm wrong). I'm not saying the approach using variant2 is wrong, for some people. But it's not zero overhead, and IMHO a Boost-quality wrapper should be very low overhead for SQLite, w/o having to resort back to pure C and the official API. In my vision of the dual-level wrapping, many types don't control lifetime; they're temporaries over raw handles that add safety and convenience.

Regarding the examples, I'd want more useful things. E.g. for table-valued functions, a better [CArray][2] for C++ that works around the lack of arrays in SQLite. You can't do SELECT ... WHERE col = ANY($1) like you can in PostgreSQL's libpq, binding a single value that's an array (of PKs, for example). I.e. put the use of a table-valued function over a C++ container in a useful SQL context, one that's very familiar to anyone resorting to WHERE col IN (?, ?, ?, ...).

Instead of integrating with Boost.JSON, which is only a tiny convenience hiding the serialization / deserialization, write a competitor to SQLite's own built-in [JSON1][3], to demonstrate Boost.JSON is faster than SQLite's own JSON parser (or is it???), while at the same time providing excellent examples of eponymous virtual tables.

Or an example using Boost.Multi-Index that automatically implements best_index() based on meta-programming and the indexes exposed by the BMI container, kinda like what we did years ago, but crudely (Steven helped with the meta-programming at the time, I recall). It would know whether an index is UNIQUE or not, whether it's hash-based or ordered, and based on that synthesize, or at least help synthesize, best_index(). I.e. advanced magic stuff that's almost a 3rd level of wrapping, bridging SQLite and Multi-Index.

And then there are things around SQLite that go beyond wrapping the SQLite API. For example, I wanted access to header values that there are no pragmas for.
But instead of redoing the parsing myself, I used tricks like the ones below, which I suspect few people know about:

```
application_id_ = db_.executeInt("PRAGMA application_id");
user_version_   = db_.executeInt("PRAGMA user_version");
page_size_      = db_.executeInt("PRAGMA page_size");

sqlite3_file* db_fp = nullptr;
if (SQLITE_OK == sqlite3_file_control(db_._handle, nullptr,
                                      SQLITE_FCNTL_FILE_POINTER, &db_fp)) {
    if (const sqlite3_io_methods* io = db_fp->pMethods) {
        int val = 0;
        if (SQLITE_OK == io->xRead(db_fp, &val, 4, 24)) {
            change_counter_ = boost::endian::big_to_native(val);
        }
        if (SQLITE_OK == io->xRead(db_fp, &val, 4, 96)) {
            sqlite_version_ = boost::endian::big_to_native(val);
        }
    } else { // in-memory DB
        change_counter_ = 0; // TODO
        sqlite_version_ = SQLITE_VERSION_NUMBER;
    }
}
```

Or I needed to be able to open an SQLite DB that's truncated, because I read only the first 64KB of the DB from a TAR, so I could tell whether the large DBs inside that TAR are up-to-date or not, w/o untarring GBs of data. That requires an sqlite3_db_config trick:
```
bool DbInfo::isSQLiteDB(
    const fs::path& path, AllowTruncated allow_truncated, int* p_user_version
) {
    assert(fs::is_regular_file(path));
    auto filename = path.string();
    sqlite3* db = nullptr;
    int errc = sqlite3_open_v2(filename.c_str(), &db, SQLITE_OPEN_READONLY, nullptr);
    if (errc != SQLITE_OK) {
        sqlite3_close(db); // must close even on error
        return false;
    }
    if (bool(allow_truncated)) {
        // see https://sqlite.org/forum/forumpost/324ee8bd52
        int new_val = 1;
        int actual_val = 0;
        sqlite3_db_config(db, SQLITE_DBCONFIG_WRITABLE_SCHEMA, new_val, &actual_val);
        //sqlite3_db_config(db, SQLITE_DBCONFIG_TRUNCATED_SCHEMA, 1);
    }
    sqlite3_stmt* stmt = nullptr;
    const char* sql_end = nullptr;
    constexpr char sql[] = "pragma user_version";
    errc = sqlite3_prepare_v2(db, sql, sizeof(sql) - 1, &stmt, &sql_end);
    if ((errc == SQLITE_OK) && stmt) {
        assert((sql_end - sql) == sizeof(sql) - 1);
        if (SQLITE_ROW != sqlite3_step(stmt)) {
            errc = SQLITE_NOTADB;
        } else if (p_user_version) {
            *p_user_version = sqlite3_column_int(stmt, 0);
        }
        sqlite3_finalize(stmt);
    } else {
        errc = SQLITE_NOTADB;
    }
    sqlite3_close(db);
    return errc == SQLITE_OK;
}
```

That's the SQLite wrapper I have in mind: different layers of increasing complexity and higher levels of abstraction, plus collections of tricks and utilities around SQLite DBs. And possibly even competing with SQLite on generating DBs directly, w/o going through the SQLite API, the SQL, the bytecode VM, the Btree, and the page cache, but writing just the [file format][4]: when you know the cardinality in advance and control the ordering, you can precompute the fan-out of the intermediate Btree pages and the overflow pages, and thus "stream" the database, possibly concurrently for several tables, by reserving blocks of pages and patching the root pages in sqlite_master after the fact.

That's my vision. But it's just that, a vision.
--DD

[1]: https://sqlite.org/c3ref/result_blob.html
[2]: https://www.sqlite.org/carray.html
[3]: https://www.sqlite.org/json1.html
[4]: https://www.sqlite.org/fileformat.html