2026-02-25The 7th batchJunio C Hamano
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25Merge branch 'ac/string-list-sort-u-and-tests'Junio C Hamano
Code clean-up using a new helper function introduced lately. * ac/string-list-sort-u-and-tests: sparse-checkout: use string_list_sort_u
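The sort-then-unique behavior behind `string_list_sort_u` can be sketched in Python (an illustrative model of the idea, not git's C implementation):

```python
def sort_u(items):
    """Sort a list of strings in place, then drop adjacent
    duplicates -- the combined operation string_list_sort_u
    provides as a single helper."""
    items.sort()
    out = []
    for s in items:
        # After sorting, duplicates are adjacent, so comparing
        # against the last kept element is enough to deduplicate.
        if not out or out[-1] != s:
            out.append(s)
    return out
```

For example, `sort_u(["b", "a", "b"])` yields `["a", "b"]`.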
2026-02-25Merge branch 'mc/tr2-process-ancestry-cleanup'Junio C Hamano
Add process ancestry data to trace2 on macOS to match what we already do on Linux and Windows. Also adjust the way Windows implementation reports this information to match the other two. * mc/tr2-process-ancestry-cleanup: t0213: add trace2 cmd_ancestry tests test-tool: extend trace2 helper with 400ancestry trace2: emit cmd_ancestry data for Windows trace2: refactor Windows process ancestry trace2 event build: include procinfo.c impl for macOS trace2: add macOS process ancestry tracing
2026-02-25Merge branch 'ps/pack-concat-wo-backfill'Junio C Hamano
"git pack-objects --stdin-packs" with "--exclude-promisor-objects" fetched objects that are promised, which was not wanted. This has been fixed. * ps/pack-concat-wo-backfill: builtin/pack-objects: don't fetch objects when merging packs
2026-02-25Merge branch 'dk/complete-stash-import-export'Junio C Hamano
Command line completion (in contrib/) update. * dk/complete-stash-import-export: completion: add stash import, export
2026-02-25Merge branch 'jc/doc-cg-needswork'Junio C Hamano
A CodingGuidelines update. * jc/doc-cg-needswork: CodingGuidelines: document NEEDSWORK comments
2026-02-25Merge branch 'ds/revision-maximal-only'Junio C Hamano
"git rev-list" and friends learn "--maximal-only" to show only the commits that are not reachable by other commits. * ds/revision-maximal-only: revision: add --maximal-only option
2026-02-25Merge branch 'cc/lop-filter-auto'Junio C Hamano
"auto filter" logic for large-object promisor remote. * cc/lop-filter-auto: fetch-pack: wire up and enable auto filter logic promisor-remote: change promisor_remote_reply()'s signature promisor-remote: keep advertised filters in memory list-objects-filter-options: support 'auto' mode for --filter doc: fetch: document `--filter=<filter-spec>` option fetch: make filter_options local to cmd_fetch() clone: make filter_options local to cmd_clone() promisor-remote: allow a client to store fields promisor-remote: refactor initialising field lists
2026-02-25Merge branch 'pw/commit-msg-sample-hook'Junio C Hamano
Update the sample commit-msg hook to complain when a log message contains, in its middle, material that mailinfo would consider the end of the log message. * pw/commit-msg-sample-hook: templates: detect commit messages containing diffs templates: add .gitattributes entry for sample hooks
2026-02-25Merge branch 'kh/doc-am-format-sendmail'Junio C Hamano
Doc update. * kh/doc-am-format-sendmail: doc: add caveat about round-tripping format-patch
2026-02-25Documentation/git-repo: capitalize format descriptionsLucas Seiki Oshiro
The descriptions for the git-repo output formats are in lowercase. Capitalize these descriptions, making them consistent with the rest of the documentation. Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25Documentation/git-repo: replace 'NUL' with '_NUL_'Lucas Seiki Oshiro
Replace all occurrences of "NUL" by "_NUL_" in git-repo.adoc, following the convention used by other documentation files. Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25t1901: adjust nul format output instead of expected valueLucas Seiki Oshiro
The test 'keyvalue and nul format', as its description says, tests both the `keyvalue` and `nul` formats. These formats are similar, differing only in their field separator (= in the former, LF in the latter) and their record separator (LF in the former, NUL in the latter). This way, both formats can be tested using the same expected output, only replacing the separators in one of the output formats. However, it is not desirable to have a NUL character in the files compared by test_cmp because, if that assertion fails, diff will consider them binary files and won't display the differences properly. Adjust the output of `git repo structure --format=nul` in t1901 to match the --format=keyvalue one. Compare this output against the same value expected from --format=keyvalue, without using files with NUL characters in test_cmp. Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
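The separator swap described above can be sketched in Python (an illustrative model; the actual test script performs the rewrite with shell tools):

```python
def nul_to_keyvalue(data: bytes) -> bytes:
    """Rewrite --format=nul output (key LF value NUL per record)
    into --format=keyvalue output (key '=' value LF per record),
    so the result contains no NUL bytes and can be diffed as text."""
    out = []
    for record in data.split(b"\0"):
        if not record:
            continue  # trailing empty chunk after the last NUL
        key, _, value = record.partition(b"\n")
        out.append(key + b"=" + value + b"\n")
    return b"".join(out)
```

For example, `nul_to_keyvalue(b"bare\nfalse\0")` yields `b"bare=false\n"`.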
2026-02-25t1900: rename t1900-repo to t1900-repo-infoLucas Seiki Oshiro
Since the commit bbb2b93348 (builtin/repo: introduce structure subcommand, 2025-10-21), t1901 specifically tests git-repo-structure. Rename t1900-repo to t1900-repo-info to clarify that it focuses solely on the git-repo-info subcommand. Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25repo: rename struct field to repo_info_fieldLucas Seiki Oshiro
Change the name of the struct field to repo_info_field, making it explicit that it is an internal data type of git-repo-info. Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25repo: replace get_value_fn_for_key by get_repo_info_fieldLucas Seiki Oshiro
Remove the function `get_value_fn_for_key`, which returns a function that retrieves a value for a certain repo info key. Introduce `get_repo_info_field` instead, which returns a struct field. This refactor makes the structure of the function print_fields more consistent with the function print_all_fields, improving its readability. Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25repo: rename repo_info_fields to repo_info_fieldLucas Seiki Oshiro
Rename repo_info_fields to repo_info_field, following the CodingGuidelines rule of naming arrays in the singular. Rename all the references to that array accordingly. Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25CodingGuidelines: instruct to name arrays in singularLucas Seiki Oshiro
Arrays should be named in the singular form, ensuring that when accessing an element within an array (e.g. dog[0]) it's clear that we're referring to an element instead of a collection. Add a new rule to CodingGuidelines asking for arrays to be named in singular instead of plural. Helped-by: Eric Sunshine <sunshine@sunshineco.com> Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25refs: add GIT_REFERENCE_BACKEND to specify reference backendKarthik Nayak
Git allows setting a different object directory via 'GIT_OBJECT_DIRECTORY', but provides no equivalent for references. In the previous commit we extended the 'extensions.refStorage' config to also support a URI input for the reference backend with a location. Let's also add a new environment variable 'GIT_REFERENCE_BACKEND' that takes the same input as the config variable. Having an environment variable allows us to modify the reference backend and location on the fly for individual Git commands. The environment variable also allows usage of alternate reference directories during 'git-clone(1)' and 'git-init(1)'. Add the config to the repository when created with the environment variable set. When initializing the repository with an alternate reference folder, create the required stubs in the repository's $GIT_DIR. The inverse, i.e. removal of the ref store, doesn't clean up the stubs in the $GIT_DIR since that would render it unusable. Removal of the ref store is only used when migrating between ref formats, and cleanup of the $GIT_DIR doesn't make sense in such a situation. Helped-by: Jean-Noël Avila <jn.avila@free.fr> Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25refs: allow reference location in refstorage configKarthik Nayak
The 'extensions.refStorage' config is used to specify the reference backend for a given repository. Both the 'files' and 'reftable' backends utilize the $GIT_DIR as the reference folder by default in `get_main_ref_store()`. Since the reference backends are pluggable, this means that they could work with out-of-tree reference directories too. Extend the 'refStorage' config to also support taking a URI input, where users can specify the reference backend and the location. Add the required changes to obtain and propagate this value to the individual backends. Add the necessary documentation and tests. Traditionally, for linked worktrees, references were stored in the '$GIT_DIR/worktrees/<wt_id>' path. But when using an alternate reference storage path, it doesn't make sense to store the main worktree references in the new path and the linked worktree references in the $GIT_DIR. So, let's store linked worktree references in '$ALTERNATE_REFERENCE_DIR/worktrees/<wt_id>'. To do this, create the necessary files and folders while also adding stubs in the $GIT_DIR path to ensure that it is still considered a Git directory. Ideally, we would want to pass a `struct worktree *` to the individual backends, instead of passing the `gitdir`, allowing them to handle worktree-specific logic. Currently, that is not possible since the worktree code is tied to using the global `the_repository` variable, and is not set up before the reference database during initialization of the repository. Add a TODO in 'refs.c' to ensure we can eventually make that change. Helped-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
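The split of a refStorage value into a backend name and an optional location can be sketched as follows. This is a hedged illustration only: the exact URI syntax accepted by the config is not spelled out here, so the `<backend>:<location>` shape and the function name are assumptions for demonstration.

```python
def parse_ref_storage(value):
    """Split a hypothetical extensions.refStorage value of the
    form '<backend>:<location>' into its parts. A bare backend
    name (no ':') keeps the default $GIT_DIR location, modeled
    here as None."""
    backend, sep, location = value.partition(":")
    return backend, (location if sep else None)
```

For example, `parse_ref_storage("reftable:/srv/refs")` yields `("reftable", "/srv/refs")`, while `parse_ref_storage("files")` yields `("files", None)`.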
2026-02-25refs: receive and use the reference storage payloadKarthik Nayak
An upcoming commit will add support for providing a URI via the 'extensions.refStorage' config. The URI will contain the reference backend and a corresponding payload. The payload can then be used to provide an alternate location for the reference backend. To prepare for this, modify the existing backends to accept such an argument when initializing via the 'init()' function. Both the files and reftable backends will parse this information as filesystem paths for storing references. Given that no callers pass any payload yet, this is essentially a no-op change for now. To enable this, provide a 'refs_compute_filesystem_location()' function which will parse the current 'gitdir' and the 'payload' to provide the final reference directory and common reference directory (if working in a linked worktree). The documentation and tests will be added alongside the extension of the config variable. Helped-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25refs: move out stub modification to generic layerKarthik Nayak
When creating the reftable reference backend on disk, we create stubs to ensure that the directory can be recognized as a Git repository. This is done by calling `refs_create_refdir_stubs()`. Move this to the generic layer, as it is needed for all backends except the files backend. In an upcoming commit where we introduce alternate reference backend locations, we'll have to also create stubs in the $GIT_DIR irrespective of the backend being used. This commit builds the base to add that logic. Similarly, move the logic for deletion of stubs to the generic layer. The files backend recursively calls the remove function of the 'packed-backend'; there, skip calling the generic function, since it would try to delete the stubs. Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25refs: extract out `refs_create_refdir_stubs()`Karthik Nayak
For Git to recognize a directory as a Git directory, it requires the directory to contain: 1. 'HEAD' file 2. 'objects/' directory 3. 'refs/' directory Here, #1 and #3 are part of the reference storage mechanism, specifically the files backend. Since then, newer backends such as the reftable backend have moved to using their own path ('reftable/') for storing references. But to ensure Git still recognizes the directory as a Git directory, we create stubs. There are two locations where we create stubs: - In 'refs/reftable-backend.c' when creating the reftable backend. - In 'clone.c' before spawning transport helpers. In a following commit, we'll add another instance. So instead of repeating the code, let's extract out this code to `refs_create_refdir_stubs()` and use it. Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25setup: don't modify repo in `create_reference_database()`Karthik Nayak
The `create_reference_database()` function is used to create the reference database during initialization of a repository. The function calls `repo_set_ref_storage_format()` to set the repository's reference format. This is an unexpected side effect of the function. More so because the function is only called in two locations: 1. During git-init(1), where the value is propagated from the `struct repository_format repo_fmt` value. 2. During git-clone(1), where the value is propagated from the `the_repository` value. The former is valid; however, the flow already calls `repo_set_ref_storage_format()`, so this effort is simply duplicated. The latter sets the existing value in `the_repository` back to itself. While this is okay for now, introduction of more fields in `repo_set_ref_storage_format()` would cause issues, especially with dynamically allocated strings, where we would free/allocate the same string back into `the_repository`. To avoid all this confusion, clean up the function to no longer take in and set the repo's reference storage format. Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-25fetch: fix wrong evaluation order in URL trailing-slash trimmingcuiweixie
If i == -1, evaluating url[i] is undefined behavior, so the bounds check on i must come first. Signed-off-by: cuiweixie <cuiweixie@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
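The fixed evaluation order can be sketched in Python (an illustrative model of the loop shape, not the actual C code in fetch):

```python
def trim_trailing_slashes(url):
    """Strip trailing '/' characters from a URL. The bounds check
    (i >= 0) is evaluated before url[i]: in C, testing url[i]
    first would read out of bounds once i reaches -1 on an
    all-slash string, which is the bug being fixed."""
    i = len(url) - 1
    while i >= 0 and url[i] == "/":
        i -= 1
    return url[:i + 1]
```

For example, `trim_trailing_slashes("///")` safely returns `""` because the index check short-circuits before any out-of-range access.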
2026-02-24midx: enable reachability bitmaps during MIDX compactionTaylor Blau
Enable callers to generate reachability bitmaps when performing MIDX layer compaction by combining all existing bitmaps from the compacted layers. Note that because of the object/pack ordering described by the previous commit, the pseudo-pack order for the compacted MIDX is the same as concatenating the individual pseudo-pack orderings for each layer in the compaction range. As a result, the only non-test or documentation change necessary is to treat all objects as non-preferred during compaction so as not to disturb the object ordering. In the future, we may want to adjust which commit(s) receive reachability bitmaps when compacting multiple .bitmap files into one, or even generate new bitmaps (e.g., if the references have moved significantly since the .bitmap was generated). This commit only implements combining all existing bitmaps in range together in order to demonstrate and lay the groundwork for more exotic strategies. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24midx: implement MIDX compactionTaylor Blau
When managing a MIDX chain with many layers, it is convenient to combine a sequence of adjacent layers into a single layer to prevent the chain from growing too long. While it is conceptually possible to "compact" a sequence of MIDX layers together by running "git multi-pack-index write --stdin-packs", there are a few drawbacks that make this less than desirable: - Preserving the MIDX chain is impossible, since there is no way to write a MIDX layer that contains objects or packs found in an earlier MIDX layer already part of the chain. So callers would have to write an entirely new (non-incremental) MIDX containing only the compacted layers, discarding all other objects/packs from the MIDX. - There is (currently) no way to write a MIDX layer outside of the MIDX chain to work around the above, such that the MIDX chain could be reassembled substituting the compacted layers with the MIDX that was written. - The `--stdin-packs` command-line option does not allow us to specify the order of packs as they appear in the MIDX. Therefore, even if there were workarounds for the previous two challenges, any bitmaps belonging to layers which come after the compacted layer(s) would no longer be valid. This commit introduces a way to compact a sequence of adjacent MIDX layers into a single layer while preserving the MIDX chain, as well as any bitmap(s) in layers which are newer than the compacted ones. Implementing MIDX compaction does not require a significant number of changes to how MIDX layers are written. The main changes are as follows: - Instead of calling `fill_packs_from_midx()`, we call a new function `fill_packs_from_midx_range()`, which walks backwards along the portion of the MIDX chain which we are compacting, and adds packs one layer at a time. To preserve the pseudo-pack order, the concatenated pack order is retained, with the exception of preferred packs, which are always added first.
- After adding entries from the set of packs in the compaction range, `compute_sorted_entries()` must adjust the `pack_int_id`'s for all objects added in each fanout layer to match their original `pack_int_id`'s (as opposed to the index at which each pack appears in `ctx.info`). Note that we cannot reuse `midx_fanout_add_midx_fanout()` directly here, as it unconditionally recurses through the `->base_midx`. Factor out a `_1()` variant that operates on a single layer, reimplement the existing function in terms of it, and use the new variant from `midx_fanout_add_compact()`. Since we are sorting the list of objects ourselves, the order we add them in does not matter. - When writing out the new 'multi-pack-index-chain' file, discard any layers in the compaction range, replacing them with the newly written layer, instead of keeping them and placing the new layer at the end of the chain. This ends up being sufficient to implement MIDX compaction in such a way that preserves bitmaps corresponding to more recent layers in the MIDX chain. The tests for MIDX compaction are so far fairly spartan, since the main interesting behavior here is ensuring that the right packs/objects are selected from each layer, and that the pack order is preserved regardless of whether the packs are sorted in lexicographic order in the original MIDX chain. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
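The pack-ordering step described above can be sketched in Python. This is an illustrative model under the stated rule (concatenated layer order retained, preferred packs hoisted to the front), not the C implementation of `fill_packs_from_midx_range()`:

```python
def compacted_pack_order(layers, preferred):
    """Concatenate the pack lists of the MIDX layers being
    compacted (oldest layer first), then hoist any preferred
    packs ahead of all others while keeping relative order
    otherwise."""
    order = [p for layer in layers for p in layer]
    pref = [p for p in order if p in preferred]
    rest = [p for p in order if p not in preferred]
    return pref + rest
```

With two layers `["X", "Y", "Z"]` and `["A", "B", "C"]` and preferred pack `Y`, the result is `["Y", "X", "Z", "A", "B", "C"]`, matching the concatenated order with the preferred pack first.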
2026-02-24t/helper/test-read-midx.c: plug memory leak when selecting layerTaylor Blau
Though our 'read-midx' test tool is capable of printing information about a single MIDX layer identified by its checksum, no caller in our test suite exercises this path. Unfortunately, there is a memory leak lurking in this (currently) unused path that would otherwise be exposed by the following commit. This occurs when providing a MIDX layer checksum other than the tip. As we walk over the MIDX chain trying to find the matching layer, we drop our reference to the top-most MIDX layer. Thus, our call to 'close_midx()' later on leaks memory between the top-most MIDX layer and the MIDX layer immediately following the specified one. Plug this leak by holding a reference to the tip of the MIDX chain, and ensure that we call `close_midx()` before terminating the test tool. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24midx-write.c: factor fanout layering from `compute_sorted_entries()`Taylor Blau
When computing the set of objects to appear in a MIDX, we use compute_sorted_entries(), which handles objects from various existing sources one fanout layer at a time. The process for computing this set is slightly different during MIDX compaction, so factor out the existing functionality into its own routine to prevent `compute_sorted_entries()` from becoming too difficult to read. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24midx-write.c: enumerate `pack_int_id` values directlyTaylor Blau
Our `midx-write.c::fill_packs_from_midx()` function currently enumerates the range [0, m->num_packs), and then shifts its index variable up by `m->num_packs_in_base` to produce a valid `pack_int_id`. Instead, directly enumerate the range [m->num_packs_in_base, m->num_packs_in_base + m->num_packs), which consists of the original pack_int_ids themselves, as opposed to the indexes of those packs relative to the MIDX layer they are contained within. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24midx-write.c: extract `fill_pack_from_midx()`Taylor Blau
When filling packs from an existing MIDX, `fill_packs_from_midx()` handles preparing a MIDX'd pack, and reading out its pack name from the existing MIDX. MIDX compaction will want to perform an identical operation, though the caller will look quite different than `fill_packs_from_midx()`. To reduce any future code duplication, extract `fill_pack_from_midx()` from `fill_packs_from_midx()` to prepare to call our new helper function in a future change. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24midx-write.c: introduce `midx_pack_perm()` helperTaylor Blau
The `ctx->pack_perm` array can be considered a permutation from the original `pack_int_id` of a given pack to its position in the `ctx->info` array containing all packs. Today we can always index into this array with any known `pack_int_id`, since there is never a `pack_int_id` which is greater than or equal to the value `ctx->nr`. That is not necessarily the case with MIDX compaction. For example, suppose we have a MIDX chain with three layers, each containing three packs. The base of the MIDX chain will have packs with IDs 0, 1, and 2, the next layer 3, 4, and 5, and so on. If we are compacting the topmost two layers, we'll have input `pack_int_id` values between [3, 8], but `ctx->nr` will only be 6. In that example, if we want to know the position of the pack whose original `pack_int_id` value was, say, 7, we would compute `ctx->pack_perm[7]`, leading to an uninitialized read, since there are only 6 entries allocated in that array. To address this, there are a couple of options: - We could allocate enough entries in `ctx->pack_perm` to accommodate the largest `orig_pack_int_id` value. - Or, we could internally shift the input values by the number of packs in the base layer of the lower end of the MIDX compaction range. This patch prepares us to take the latter approach, since it does not allocate more memory than strictly necessary. (In our above example, the base of the lower end of the compaction range is the first MIDX layer (having three packs), so we would end up indexing `ctx->pack_perm[7-3]`, which is a valid read.) Note that this patch does not actually implement that approach yet, but merely performs a behavior-preserving refactoring which will make the change easier to carry out in the future. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
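The shifted lookup described above can be sketched in Python (an illustrative model of the helper's contract; names follow the commit message, not the exact C signature):

```python
def midx_pack_perm(pack_perm, base_packs, pack_int_id):
    """Translate an original pack_int_id into an index of the
    pack_perm array by subtracting the number of packs below the
    lower end of the compaction range, so the array only needs
    entries for packs actually in the range."""
    assert pack_int_id >= base_packs, "pack below the compaction range"
    return pack_perm[pack_int_id - base_packs]
```

In the commit's example, with three packs below the range (`base_packs == 3`), looking up original ID 7 reads `pack_perm[4]` instead of the out-of-bounds `pack_perm[7]`.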
2026-02-24midx: do not require packs to be sorted in lexicographic orderTaylor Blau
The MIDX file format currently requires that pack files be identified by the lexicographic ordering of their names (that is, a pack having a checksum beginning with "abc" would have a numeric pack_int_id which is smaller than the same value for a pack beginning with "bcd"). As a result, it is impossible to combine adjacent MIDX layers together without permuting bits from bitmaps that are in more recent layer(s). To see why, consider the following example: | packs | preferred pack --------+-------------+--------------- MIDX #0 | { X, Y, Z } | Y MIDX #1 | { A, B, C } | B MIDX #2 | { D, E, F } | D , where MIDX #2's base MIDX is MIDX #1, and so on. Suppose that we want to combine MIDX layers #0 and #1, to create a new layer #0' containing the packs from both layers. With the original three MIDX layers, objects are laid out in the bitmap in the order they appear in their source pack, and the packs themselves are arranged according to the pseudo-pack order. In this case, that ordering is Y, X, Z, B, A, C. But recall that the pseudo-pack ordering is defined by the order that packs appear in the MIDX, with the exception of the preferred pack, which sorts ahead of all other packs regardless of its position within the MIDX. In the above example, that means that pack 'Y' could be placed anywhere (so long as it is designated as preferred), however, all other packs must be placed in the location listed above. Because that ordering isn't sorted lexicographically, it is impossible to compact MIDX layers in the above configuration without permuting the object-to-bit-position mapping. Changing this mapping would affect all bitmaps belonging to newer layers, rendering the bitmaps associated with MIDX #2 unreadable. One of the goals of MIDX compaction is that we are able to shrink the length of the MIDX chain *without* invalidating bitmaps that belong to newer layers, and the lexicographic ordering constraint is at odds with this goal. 
However, packs do not *need* to be lexicographically ordered within the MIDX. As far as I can gather, the only reason they are sorted lexically is to make it possible to perform a binary search over the pack names in a MIDX, necessary to make `midx_contains_pack()`'s performance logarithmic in the number of packs rather than linear. Relax this constraint by allowing MIDX writes to proceed with packs that are not arranged in lexicographic order. `midx_contains_pack()` will lazily instantiate a `pack_names_sorted` array on the MIDX, which will be used to implement the binary search over pack names. This change produces MIDXs which may not be read correctly by external tools or older versions of Git. Though older versions of Git know how to gracefully degrade and ignore any MIDX(s) they consider corrupt, external tools may not be as robust. To avoid unintentionally breaking any such tools, guard this change behind a version bump in the MIDX's on-disk format. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
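The lazily built sorted index described above can be sketched in Python (an illustrative model of the technique; `pack_names` and the class are stand-ins, not the C data structures):

```python
import bisect

class Midx:
    """Model of midx_contains_pack() with a lazily instantiated
    sorted index: the stored pack order may be arbitrary, and a
    sorted copy is built only on the first lookup to keep the
    binary search logarithmic."""
    def __init__(self, pack_names):
        self.pack_names = pack_names  # on-disk (arbitrary) order
        self._sorted = None           # built lazily, like pack_names_sorted

    def contains_pack(self, name):
        if self._sorted is None:
            self._sorted = sorted(self.pack_names)
        i = bisect.bisect_left(self._sorted, name)
        return i < len(self._sorted) and self._sorted[i] == name
```

Lookups against a MIDX whose packs are stored out of order, e.g. `Midx(["pack-b", "pack-a", "pack-c"])`, still use binary search rather than a linear scan.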
2026-02-24midx-write.c: introduce `struct write_midx_opts`Taylor Blau
In the MIDX writing code, there are four functions which perform some sort of MIDX write operation. They are: - write_midx_file() - write_midx_file_only() - expire_midx_packs() - midx_repack() All of these functions are thin wrappers over `write_midx_internal()`, which implements the bulk of these routines. As a result, the `write_midx_internal()` function takes six arguments. Future commits in this series will want to add additional arguments, and in general this function's signature will be the union of parameters among *all* possible ways to write a MIDX. Instead of adding yet more arguments to this function to support MIDX compaction, introduce a `struct write_midx_opts`, which has the same struct members as `write_midx_internal()`'s arguments. Adding additional fields to the `write_midx_opts` struct is preferable to adding additional arguments to `write_midx_internal()`. This is because the callers below all zero-initialize the struct, so each time we add a new piece of information, we do not have to pass the zero value for it in all other call-sites that do not care about it. For now, no functional changes are included in this patch. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24midx-write.c: don't use `pack_perm` when assigning `bitmap_pos`Taylor Blau
In midx_pack_order(), we compute for each bitmapped pack the first bit to correspond to an object in that pack, along with how many bits were assigned to object(s) in that pack. Initially, each bitmap_nr value is set to zero, and each bitmap_pos value is set to the sentinel BITMAP_POS_UNKNOWN. This is done to ensure that there are no packs that have an unknown bit position but a somehow non-zero number of objects (cf. `write_midx_bitmapped_packs()` in midx-write.c). Once the pack order is fully determined, midx_pack_order() sets the bitmap_pos field for any bitmapped packs to zero if they are still listed as BITMAP_POS_UNKNOWN. However, we enumerate the bitmapped packs in order of `ctx->pack_perm`. This is fine for existing cases, since the only time the `ctx->pack_perm` array holds a value outside of the addressable range of `ctx->info` is when there are expired packs, which only occurs via 'git multi-pack-index expire', which does not support writing MIDX bitmaps. As a result, the range of ctx->pack_perm covers all values in [0, `ctx->nr`), so enumerating in this order isn't an issue. A future change necessary for compaction will complicate this further by introducing a wrapper around the `ctx->pack_perm` array, which turns the given `pack_int_id` into one that is relative to the lower end of the compaction range. As a result, indexing into `ctx->pack_perm` through this helper, say, with "0" will produce a crash when the lower end of the compaction range has >0 pack(s) in its base layer, since the subtraction will wrap around the 32-bit unsigned range, resulting in an uninitialized read. But the process is completely unnecessary in the first place: we are enumerating all values of `ctx->info`, and there is no reason to process them in a different order than they appear in memory. Index `ctx->info` directly to reflect that. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24t/t5319-multi-pack-index.sh: fix copy-and-paste error in t5319.39Taylor Blau
Commit d4bf1d88b90 (multi-pack-index: verify missing pack, 2018-09-13) adds a new test to the MIDX test script to test how we handle missing packs. While the commit itself describes the test as "verify missing pack[s]", the test itself is actually called "verify packnames out of order", despite that not being what it tests. Likely this was a copy-and-paste of the test immediately above it of the same name. Correct this by renaming the test to match the commit message. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24git-multi-pack-index(1): align SYNOPSIS with 'git multi-pack-index -h'Taylor Blau
Since c39fffc1c90 (tests: start asserting that *.txt SYNOPSIS matches -h output, 2022-10-13), the manual page for 'git multi-pack-index' has a SYNOPSIS section which differs from 'git multi-pack-index -h'. Correct this while also documenting additional options accepted by the 'write' sub-command. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24git-multi-pack-index(1): remove non-existent incompatibilityTaylor Blau
Since fcb2205b774 (midx: implement support for writing incremental MIDX chains, 2024-08-06), the command-line options '--incremental' and '--bitmap' were declared to be incompatible with one another when running 'git multi-pack-index write'. However, since 27afc272c49 (midx: implement writing incremental MIDX bitmaps, 2025-03-20), that incompatibility no longer exists, despite the documentation saying so. Correct this by removing the stale reference to their incompatibility. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24builtin/multi-pack-index.c: make '--progress' a common optionTaylor Blau
All multi-pack-index sub-commands (write, verify, repack, and expire) support a '--progress' command-line option, despite not listing it as one of the common options in `common_opts`. As a result each sub-command declares its own `OPT_BIT()` for a "--progress" command-line option. Centralize this within the `common_opts` to avoid re-declaring it in each sub-command. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24midx: introduce `midx_get_checksum_hex()`Taylor Blau
When trying to print out, say, the hexadecimal representation of a MIDX's hash, our code will do something like:

    hash_to_hex_algop(midx_get_checksum_hash(m),
                      m->source->odb->repo->hash_algo);

which is both cumbersome and repetitive. In fact, all but a handful of callers of `midx_get_checksum_hash()` do exactly the above. Reduce this repetition by introducing `midx_get_checksum_hex()`, which returns a pointer to a static buffer containing the above result. The handful of callers that do need to compare the raw bytes and don't want to deal with an encoded copy (e.g., because they are passing it to hasheq() or similar) may still rely on `midx_get_checksum_hash()`, which returns the raw bytes. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24midx: rename `get_midx_checksum()` to `midx_get_checksum_hash()`Taylor Blau
Since 541204aabea (Documentation: document naming schema for structs and their functions, 2024-07-30), we have adopted a naming convention for functions that would prefer a name like, say, `midx_get_checksum()` over `get_midx_checksum()`. Adopt this convention throughout the midx.h API. Since this function returns a raw (that is, non-hex encoded) hash, let's suffix the function with "_hash()" to make this clear. As a side effect, this prepares us for the subsequent change which will introduce a "_hex()" variant that encodes the checksum itself. Suggested-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24midx: mark `get_midx_checksum()` arguments as constTaylor Blau
To make clear that the function `get_midx_checksum()` does not do anything to modify its argument, mark the MIDX pointer as const. The following commit will rename this function altogether to make clear that it returns the raw bytes of the checksum, not a hex-encoded copy of it. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24build: regenerate config-list.h when Documentation changesD. Ben Knoble
The Meson-based build doesn't know when to rebuild config-list.h, so the header is sometimes stale. For example, an old build directory might have config-list.h from before 4173df5187 (submodule: introduce extensions.submodulePathConfig, 2026-01-12), which added submodule.<name>.gitdir to the list. Without it, t9902-completion.sh fails. Regenerating the config-list.h artifact from sources fixes the artifact and the test. Since Meson does not have (or want) builtin support for globbing like Make, teach generate-configlist.sh to also generate a list of Documentation files its output depends on, and incorporate that into the Meson build. We honor the undocumented GCC/Clang contract of outputting empty targets for all the dependencies (like they do with -MP). That is, generate lines like

    build/config-list.h: $SOURCE_DIR/Documentation/config.adoc

    $SOURCE_DIR/Documentation/config.adoc:

We assume that if a user adds a new file under Documentation/config then they will also edit one of the existing files to include that new file, and that will trigger a rebuild. Also mark the generator script as a dependency. While we're at it, teach the Makefile to use the same "the script knows its dependencies" logic. For Meson, combining the following commands helps debug dependencies:

    ninja -C <builddir> -t deps config-list.h
    ninja -C <builddir> -t browse config-list.h

The former lists all the dependencies discovered from our output ".d" file (the config documentation) and the latter shows the dependency on the script itself, among other useful edges in the dependency graph. Helped-by: Patrick Steinhardt <ps@pks.im> Helped-by: Phillip Wood <phillip.wood@dunelm.org.uk> Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
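The depfile contract described above (the target's prerequisites, followed by an empty phony rule per prerequisite, as `cc -MP` emits) can be sketched in shell; the file names here are illustrative, not generate-configlist.sh's actual logic:

```shell
# Emit a make-style ".d" dependency file for a generated header.
# Illustrative names: the target and dependency list are made up.
target=build/config-list.h
deps="Documentation/config.adoc Documentation/config/add.adoc"

{
	# Main rule: the target depends on every documentation file.
	printf '%s:' "$target"
	for d in $deps; do printf ' %s' "$d"; done
	printf '\n\n'
	# Empty phony targets, so deleting a dependency doesn't break the build.
	for d in $deps; do printf '%s:\n' "$d"; done
} > config-list.d

cat config-list.d
```

Without the phony rules, removing a listed Documentation file would make the build fail with "No rule to make target" instead of simply regenerating the header.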
2026-02-24Merge branch 'jh/alias-i18n' into jh/alias-i18n-fixesJunio C Hamano
* jh/alias-i18n: completion: fix zsh alias listing for subsection aliases alias: support non-alphanumeric names via subsection syntax alias: prepare for subsection aliases help: use list_aliases() for alias listing
2026-02-24builtin/maintenance: use "geometric" strategy by defaultPatrick Steinhardt
The git-gc(1) command was introduced in the early days of Git in 30f610b7b0 (Create 'git gc' to perform common maintenance operations., 2006-12-27) as the main repository maintenance utility. And while the tool has of course evolved since then to cover new parts, the basic strategy it uses has never really changed much. It is safe to say that since 2006 the Git ecosystem has changed quite a bit. Repositories tend to be much larger nowadays than they were almost 20 years ago, and large parts of the industry went crazy for monorepos (for various wildly different definitions of "monorepo"). So the maintenance strategy we used back then may no longer be the best fit. Arguably, most of the maintenance tasks that git-gc(1) does are still perfectly fine today: repacking references, expiring various data structures and things like that tend not to cause huge problems. But the big exception is the way we repack objects. git-gc(1) uses a split strategy: it performs incremental repacks by default, and then whenever we have too many packs we perform a large all-into-one repack. This all-into-one repack is what is causing problems nowadays, as it is an operation that is quite expensive. While it is wasteful in small- and medium-sized repositories, in large repos it may even be prohibitively expensive. We eventually introduced git-maintenance(1), which was slated as a replacement for git-gc(1). In contrast to git-gc(1), it is much more flexible as it is structured around configurable tasks and strategies. So while its default "gc" strategy still uses git-gc(1) under the hood, it allows us to iterate. A second strategy it knows about is the "incremental" strategy, which we configure when registering a repository for scheduled maintenance. This strategy isn't really a full replacement for git-gc(1) though, as it doesn't know how to expire unused data structures.
In Git 2.52 we have thus introduced a new "geometric" strategy that is a proper replacement for the old git-gc(1). In contrast to the incremental/all-into-one split used by git-gc(1), the new "geometric" strategy maintains a geometric progression of packfiles, which significantly reduces the number of all-into-one repacks that we have to perform in large repositories. It is thus a much better fit for large repositories than git-gc(1). Note that the "geometric" strategy isn't perfect though: while we perform far fewer all-into-one repacks than git-gc(1), we still have to perform them eventually. But for the largest repositories out there even that may not be an option, as client machines might not be powerful enough to perform such a repack in the first place. These cases would thus still be covered by the "incremental" strategy. Switch the default strategy away from "gc" to "geometric", but retain the "incremental" strategy configured when registering background maintenance with `git maintenance register`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
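The geometric progression the strategy maintains can be illustrated with a rough sketch; this is a simplified model, not Git's actual repack code, and the factor and roll-up rule are assumptions for illustration:

```python
def geometric_rollup(sizes, factor=2):
    """Decide which packs to merge so the remaining packs form a
    geometric progression.  Packs are sorted by object count; any pack
    smaller than `factor` times the combined count of all smaller packs
    gets rolled up into a single new pack, along with everything below
    it.  Returns (packs_to_merge, packs_to_keep)."""
    sizes = sorted(sizes)
    split = 0
    for i in range(1, len(sizes)):
        # Pack i violates the invariant: roll up everything through i.
        if sizes[i] < factor * sum(sizes[:i]):
            split = i + 1
    return sizes[:split], sizes[split:]

# Small packs get merged into one; the large pack is left untouched,
# which is what keeps repeated maintenance cheap in big repositories.
merge, keep = geometric_rollup([1, 2, 4, 100])
print(merge, keep)  # -> [1, 2, 4] [100]
```

The point of the invariant is that the biggest packs are rewritten only rarely, so the cost of each maintenance run is dominated by the small recent packs rather than the whole repository.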
2026-02-24t7900: prepare for switch of the default strategyPatrick Steinhardt
The t7900 test suite is exercising git-maintenance(1) and is thus of course heavily reliant on the exact maintenance strategy. This reliance comes in two flavors:

- One test explicitly wants to verify that git-gc(1) is run as part of `git maintenance run`. This test is adapted by explicitly picking the "gc" strategy.

- The other tests assume a specific shape of the object database, which is dependent on whether or not we run auto-maintenance before we come to the actual subject under test. These tests are adapted by disabling auto-maintenance.

With these changes t7900 passes with both "gc" and "geometric" default strategies. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24t6500: explicitly use "gc" strategyPatrick Steinhardt
The test in t6500 explicitly wants to exercise git-gc(1) and is thus highly specific to the actual on-disk state of the repository and specifically of the object database. An upcoming change modifies the default maintenance strategy to be the "geometric" strategy though, which breaks a couple of assumptions. One fix would arguably be to disable auto-maintenance altogether, as we do want to explicitly verify git-gc(1) anyway. But as the whole test suite is about git-gc(1) in the first place it feels more sensible to configure the default maintenance strategy to be "gc". Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
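Pinning the strategy as described above amounts to setting the `maintenance.strategy` configuration for the repository under test; a minimal sketch of what that configuration looks like (the exact mechanism the tests use may differ):

```
[maintenance]
	strategy = gc
```

With this in place, auto-maintenance keeps invoking git-gc(1) regardless of what the built-in default strategy becomes.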
2026-02-24t5510: explicitly use "gc" strategyPatrick Steinhardt
One of the tests in t5510 wants to verify that auto-gc does not lock up when fetching into a repository. Adapt it to explicitly pick the "gc" strategy for auto-maintenance. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24t5400: explicitly use "gc" strategyPatrick Steinhardt
In t5400 we verify that git-receive-pack(1) runs automated repository maintenance in the remote repository. The check is performed indirectly by observing an effect that git-gc(1) would have, namely to prune a temporary object from the object database. In a subsequent commit we're about to switch to the "geometric" strategy by default though, and here we stop observing that effect. Adapt the test to explicitly use the "gc" strategy to prepare for that upcoming change. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-02-24t34xx: don't expire reflogs where it mattersPatrick Steinhardt
We have a couple of tests in the t34xx range that rely on reflogs. This never really used to be a problem, but in a subsequent commit we will change the default maintenance strategy from "gc" to "geometric", and this will cause us to drop all reflogs in these tests. This may seem surprising and like a bug at first, but it's actually not. The main difference between these two strategies is that the "gc" strategy will skip all maintenance in case the object database is in a well-optimized state. The "geometric" strategy has separate subtasks though, and the conditions for each of these tasks is evaluated on a case by case basis. This means that even if the object database is in good shape, we may still decide to expire reflogs. So why is that a problem? The issue is that Git's test suite hardcodes the committer and author dates to a date in 2005. Interestingly though, these hardcoded dates not only impact the commits, but also the reflog entries. The consequence is that all newly written reflog entries are immediately considered stale as our reflog expiration threshold is in the range of weeks, only. It follows that executing `git reflog expire` will thus immediately purge all reflog entries. This hasn't been a problem in our test suite by pure chance, as the repository shapes simply didn't cause us to perform actual garbage collection. But with the upcoming "geometric" strategy we _will_ start to execute `git reflog expire`, thus surfacing this issue. Prepare for this by explicitly disabling reflog expiration in tests impacted by this upcoming change. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>