Merged
25 changes: 25 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,25 @@
# Codespell configuration is within pyproject.toml
---
name: Codespell

on:
push:
branches: [master]
pull_request:
branches: [master]

permissions:
contents: read

jobs:
codespell:
name: Check for spelling errors
runs-on: ubuntu-latest

steps:
- name: Checkout
uses: actions/checkout@v4
- name: Annotate locations with typos
uses: codespell-project/codespell-problem-matcher@v1
- name: Codespell
uses: codespell-project/actions-codespell@v2
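The workflow above checks out the repository, registers a problem matcher so typo locations are annotated in the PR view, and then runs codespell. Conceptually, the check itself is a per-token dictionary lookup; the sketch below illustrates the idea with a made-up three-entry dictionary (codespell's real dictionary is far larger and supports multiple candidate corrections per word):

```python
import re

# Illustrative only: codespell ships a large dictionary mapping known
# misspellings to corrections; this three-entry stand-in mimics the
# per-token lookup performed by the Codespell step above.
TYPOS = {
    "verison": "version",
    "subsytem": "subsystem",
    "occurence": "occurrence",
}

def check_line(line):
    """Return (misspelling, suggestion) pairs found in one line of text."""
    return [
        (word, TYPOS[word.lower()])
        for word in re.findall(r"[A-Za-z]+", line)
        if word.lower() in TYPOS
    ]

print(check_line("the first occurence of data"))  # -> [('occurence', 'occurrence')]
```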
2 changes: 1 addition & 1 deletion bazelisk.py
@@ -177,7 +177,7 @@ def get_version_history(bazelisk_directory):
if not release["prerelease"]
),
# This only handles versions with numeric components, but that is fine
-# since prerelease verisons have been excluded.
+# since prerelease versions have been excluded.
key=lambda version: tuple(int(component)
for component in version.split('.')),
reverse=True,
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -363,7 +363,7 @@ def _python_type_to_xref(
'desc': 'C++ class',
},
'absl::StatusOr': {
-'url': 'https://abseil.io/docs/cpp/guides/statuss#returning-a-status-or-a-value',
+'url': 'https://abseil.io/docs/cpp/guides/status#returning-a-status-or-a-value',
'object_type': 'class',
'desc': 'C++ class',
},
4 changes: 2 additions & 2 deletions docs/doctest_test.py
@@ -16,7 +16,7 @@
This allows doctest examples to be conveniently updated in case of changes to
the output format.

-After commiting or staging changes, you can run this with the `--in-place`
+After committing or staging changes, you can run this with the `--in-place`
option and then inspect the diff.

This supports top-level `await` in tests, since that provides a convenient way
@@ -327,7 +327,7 @@ def _ast_asyncify(code: str, wrapper_name: str) -> ast.Module:
This is derived from a similar workaround in IPython.

Args:
-code: Python source code to pase.
+code: Python source code to pass.
wrapper_name: Name to use for function.

Returns:
2 changes: 1 addition & 1 deletion docs/environment.rst
@@ -96,7 +96,7 @@ Debugging
Enables debug logging for tensorstore internal subsystems. Set to comma
separated list of values, where each value is one of ``name=int`` or just
``name``. When ``all`` is, present, then verbose logging will be enabled for
-each subsytem, otherwise logging is set only for those subsystems present in
+each subsystem, otherwise logging is set only for those subsystems present in
the list.

Verbose flag values include: ``curl``, ``distributed``, ``file``,
2 changes: 1 addition & 1 deletion docs/installation.rst
@@ -264,7 +264,7 @@ The following CMake generators are supported:
- Ninja and Ninja Multi-Config
- Makefile generators
- Visual Studio generators
-- Xcode (targetting arm64 only)
+- Xcode (targeting arm64 only)

The Ninja generator is recommended because it provides the fastest builds.

2 changes: 1 addition & 1 deletion docs/python/indexing.rst
@@ -423,7 +423,7 @@ Ellipsis

Specifying the special `Ellipsis` value (:python:`...`) is equivalent
to specifying as many full slices :python:`:` as needed to consume the
-remaining dimensions of the original domin not consumed by other
+remaining dimensions of the original domain not consumed by other
indexing terms:

.. doctest::
2 changes: 1 addition & 1 deletion docs/schema_schema.yml
@@ -80,7 +80,7 @@ definitions:
- A numerical :literal:`multiplier`, represented as a `double-precision
floating-point number
<https://en.wikipedia.org/wiki/Double-precision_floating-point_format>`_.
-A multiplier of ``1`` may be used to indicate a quanity equal to a
+A multiplier of ``1`` may be used to indicate a quantity equal to a
single base unit.

- A :literal:`base_unit`, represented as a string. An empty string may be used
8 changes: 4 additions & 4 deletions examples/image_convolution.cc
@@ -34,7 +34,7 @@ tensorstore::SharedArray<int, 2> ApplyKernel(
const tensorstore::ArrayView<const int, 2> in,
const tensorstore::ArrayView<const double, 2> kernel) {
// Compute bounds for the offset.
-// FIXME: It's akward that we cannot do this:
+// FIXME: It's awkward that we cannot do this:
// std::array<Index, 2> k(kernel.shape());
//
std::array<Index, 2> k;
@@ -80,7 +80,7 @@ tensorstore::SharedArray<int, 2> ApplyKernel(
}
});

-// Again, the most intutive way to write an array value
+// Again, the most intuitive way to write an array value
// is not permitted:
//
// dest[indices] = (sum / count); // error: no viable overloaded '='
@@ -154,7 +154,7 @@ void AffineWarpGrid(size_t xmax, size_t ymax,
}

// AffineWarpInverseGrid computes the inverse mapping from AffineWarpGrid,
-// so it can be used to map from a destination image to a souce image.
+// so it can be used to map from a destination image to a source image.
void AffineWarpInverseGrid(size_t xmax, size_t ymax,
tensorstore::span<const double, 6> M,
AffineWarpGridFunction fn) {
@@ -237,7 +237,7 @@ void PrintCSVArray(tensorstore::ArrayView<T, N> data) {
// reference for every element.
//
// There is a streaming operator already, but the output is
-// this is equvalent to:
+// this is equivalent to:
// for (int x = 0; x < data.shape()[0]; x++)
// for (int y = 0; y < data.shape()[1]; y++) {
// ... body ...
8 changes: 8 additions & 0 deletions pyproject.toml
@@ -39,3 +39,11 @@ version_scheme = "no-guess-dev"
# Test PyPI does not support local versions.
local_scheme = "no-local-version"
fallback_version = "0.0.0"

[tool.codespell]
# Ref: https://github.com/codespell-project/codespell#using-a-config-file
skip = '.git*,third_party'
check-hidden = true
# Do not bother with mixed case words -- variables, and test lines
ignore-regex = '\b[a-zA-Z]+[A-Z][a-z]*\b|\bEXPECT_.*'
ignore-words-list = 'ehr,ans'
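The `ignore-regex` entry above is the subtle part of this configuration: it skips words with an internal capital (camelCase identifiers) and anything following `EXPECT_` (GoogleTest assertion lines). Since codespell is a Python tool, the pattern can be explored directly with Python's `re` module; the sample words below are made up for illustration:

```python
import re

# The ignore-regex from [tool.codespell] above: the first alternative
# matches words containing an internal capital (camelCase identifiers),
# the second matches GoogleTest EXPECT_* assertion lines.
IGNORE = re.compile(r"\b[a-zA-Z]+[A-Z][a-z]*\b|\bEXPECT_.*")

# Hypothetical sample tokens, for illustration only.
print(bool(IGNORE.fullmatch("myVarible")))         # True: mixed case, skipped
print(bool(IGNORE.fullmatch("verison")))           # False: still spell-checked
print(bool(IGNORE.search("EXPECT_EQ(foo, bar)")))  # True: test line, skipped
```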
2 changes: 1 addition & 1 deletion python/tensorstore/array_type_caster.cc
@@ -277,7 +277,7 @@ pybind11::object GetNumpyArrayImpl(SharedArrayView<const void> value,
auto obj = py::reinterpret_steal<py::array>(PyArray_NewFromDescr(
/*subtype=*/&PyArray_Type,
/*descr=*/reinterpret_cast<PyArray_Descr*>(py_dtype.release().ptr()),
-/*nd=*/static_cast<int>(value.rank()),
+/*nd=*/static_cast<int>(value.rank()), // codespell:ignore nd
/*dims=*/shape,
/*strides=*/strides,
/*data=*/const_cast<void*>(value.data()), flags, nullptr));
2 changes: 1 addition & 1 deletion python/tensorstore/keyword_arguments.h
@@ -23,7 +23,7 @@
/// This mechanism allows individual keyword arguments to be defined as
/// `ParamDef` types, that specifies the name, documentation, argument type, and
/// operation to perform to "apply" the argument. These `ParamDef` types can
-/// then be re-used by multiple pybind11 functions while avoiding duplication.
+/// then be reused by multiple pybind11 functions while avoiding duplication.
///
/// Each `ParamDef` type should be a struct with the following members, and no
/// non-static data members (i.e. it should be empty):
2 changes: 1 addition & 1 deletion python/tensorstore/keyword_arguments_test.cc
@@ -88,7 +88,7 @@ Does something or other with keyword arguments.
// Args:
// required_arg: This is required
//
-// a: Specifies a. This documentaiton string is allowed
+// a: Specifies a. This documentation string is allowed
// to be more than one line.
// b: Specifies b.
//
2 changes: 1 addition & 1 deletion python/tensorstore/tests/exit_test.py
@@ -37,7 +37,7 @@ def run_during_finalization():
async def test_read():
t = ts.array([1, 2, 3], dtype=ts.int64)
await asyncio.wait_for(t.read(), timeout=1)
-# Normally, await won't suceed. However, await may still succeed, if it
+# Normally, await won't succeed. However, await may still succeed, if it
# happens that the read completed before the call to `await`.
os._exit(0)

2 changes: 1 addition & 1 deletion python/tensorstore/unit.cc
@@ -54,7 +54,7 @@ The quantity is specified as the combination of:

- A numerical :py:obj:`.multiplier`, represented as a
`double-precision floating-point number <https://en.wikipedia.org/wiki/Double-precision_floating-point_format>`_.
-A multiplier of :python:`1` may be used to indicate a quanity equal to a
+A multiplier of :python:`1` may be used to indicate a quantity equal to a
single base unit.

- A :py:obj:`.base_unit`, represented as a string. An empty string may be used
6 changes: 3 additions & 3 deletions setup.py
@@ -160,7 +160,7 @@ def _get_action_env():
Unfortunately there is no kosher way to detect the PEP517 build environment,
so this is a heuristic approach based on inspection of the PATH variable
passed to the build and review of the PIP sources. The source, as reviewed,
-creates a temporary directory inclusing pip-{kind}, where kind=build-env.
+creates a temporary directory including pip-{kind}, where kind=build-env.

See: dist-packages/pip/_internal/utils/temp_dir.py
Also: https://github.com/bazelbuild/bazel/issues/18809
@@ -178,7 +178,7 @@ def _get_action_env():
if not build_env:
return []

-# There may be mutliple path entries added under the build-env directory,
+# There may be multiple path entries added under the build-env directory,
# so remove them all.
while 'pip-build-env' in os.path.dirname(build_env):
build_env = os.path.dirname(build_env)
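The `while` loop in this hunk walks up the directory tree until the parent path no longer contains `pip-build-env`, leaving the root of pip's temporary build environment. A standalone sketch of that loop, using a hypothetical path (the real one is a randomly named temp directory):

```python
import os.path

# Hypothetical PEP 517 build-env entry taken from PATH; the real
# directory name is randomly generated by pip.
build_env = "/tmp/pip-build-env-abc123/overlay/bin"

# Same loop as in setup.py: strip trailing components until the
# parent no longer contains 'pip-build-env'.
while "pip-build-env" in os.path.dirname(build_env):
    build_env = os.path.dirname(build_env)

print(build_env)  # -> /tmp/pip-build-env-abc123
```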
@@ -216,7 +216,7 @@ def run(self):
# from the PATH as bazel is already hermetic to improve cache use.
action_env = _get_action_env()

-# Ensure python_configure.bzl finds the correct Python verison.
+# Ensure python_configure.bzl finds the correct Python version.
os.environ['PYTHON_BIN_PATH'] = sys.executable

# Ensure it is built against the version of `numpy` in the current
2 changes: 1 addition & 1 deletion tensorstore/context.cc
@@ -392,7 +392,7 @@ class ResourceReference : public ResourceSpecImplBase {
ResourceSpecImplPtr UnbindContext(
const internal::ContextSpecBuilder& spec_builder) final {
auto& builder_impl = *internal_context::Access::impl(spec_builder);
-// Ensure the referent is not re-used as an identifier for another resource.
+// Ensure the referent is not reused as an identifier for another resource.
++builder_impl.ids_[referent_];
return ResourceSpecImplPtr(this);
}
2 changes: 1 addition & 1 deletion tensorstore/contiguous_layout.h
@@ -148,7 +148,7 @@ bool PermutationMatchesOrder(span<const DimensionIndex> permutation,
void InvertPermutation(DimensionIndex rank, const DimensionIndex* perm,
DimensionIndex* inverse_perm);

-/// Normalizes `source` to a permutation if it is not already a permuation.
+/// Normalizes `source` to a permutation if it is not already a permutation.
///
/// \relates ContiguousLayoutPermutation
template <typename LayoutOrder, DimensionIndex Rank>
4 changes: 2 additions & 2 deletions tensorstore/driver/downsample/downsample_util.h
@@ -61,8 +61,8 @@ struct PropagatedIndexTransformDownsampling {
///
/// is equivalent to:
///
-transforming `b` by `propgated.transform` and then downsampling by
-`propgated.input_downsample_factors` (and possibly "squeezing" some
+transforming `b` by `propagated.transform` and then downsampling by
+`propagated.input_downsample_factors` (and possibly "squeezing" some
/// singleton dimensions that were added).
///
/// Note that this function assumes downsampling is performed using a method
2 changes: 1 addition & 1 deletion tensorstore/driver/kvs_backed_chunk_driver.h
@@ -853,7 +853,7 @@ class OpenState : public MetadataOpenState {
/// is compatible with the open request by calling
/// `open_state->GetComponentIndex`.
///
-- If it is, either re-uses an existing `DataCache` with a cache key that
+- If it is, either reuses an existing `DataCache` with a cache key that
/// matches `open_state->GetDataCacheKey`, or obtain a new `DataCache` from
/// `open_state->GetDataCache`.
///
2 changes: 1 addition & 1 deletion tensorstore/driver/n5/schema.yml
@@ -9,7 +9,7 @@ allOf:
metadata:
title: N5 array metadata.
description: |
-Specifies constraints on the metdata of a dataset exactly as in the
+Specifies constraints on the metadata of a dataset exactly as in the
`attributes.json file
<https://github.com/saalfeldlab/n5#file-system-specification-version-203-snapshot>`_,
except that all members are optional. When creating a new array, the
2 changes: 1 addition & 1 deletion tensorstore/driver/neuroglancer_precomputed/metadata.cc
@@ -1481,7 +1481,7 @@ bool GetShardChunkHierarchy(const ShardingSpec& sharding_spec,
ShardChunkHierarchy& hierarchy) {
if (sharding_spec.hash_function != ShardingSpec::HashFunction::identity) {
// For non-identity hash functions, the number of chunks per shard is not
-// predicable and the shard doesn't correspond to a rectangular region
+// predictable and the shard doesn't correspond to a rectangular region
// anyway.
return false;
}
2 changes: 1 addition & 1 deletion tensorstore/index_interval_test.cc
@@ -387,7 +387,7 @@ TEST(IndexIntervalTest, IntersectPreferringExplicit) {
OIII{IndexInterval::UncheckedClosed(0, 10), false, false}),
::testing::Eq(OIII{IndexInterval::UncheckedClosed(0, 10), false, false}));

-// These may surprise you! explicit takes prededence over implicit!
+// These may surprise you! explicit takes precedence over implicit!
EXPECT_THAT(
IntersectPreferringExplicit(
OIII{IndexInterval::UncheckedClosed(-kInfIndex, kMaxFiniteIndex),
2 changes: 1 addition & 1 deletion tensorstore/index_space/internal/numpy_indexing_spec.cc
@@ -281,7 +281,7 @@ void GetIndexedInputDims(const NumpyIndexingSpec& spec,
}
input_dims_per_intermediate_dim[intermediate_rank] = input_dim;

-// Compute `indexed_input_dims` by reodering `input_dims_per_intermediate_dim`
+// Compute `indexed_input_dims` by reordering `input_dims_per_intermediate_dim`
// by `selected_dims`.
for (const DimensionIndex intermediate_dim : selected_dims) {
for (DimensionIndex
2 changes: 1 addition & 1 deletion tensorstore/index_space/internal/propagate_bounds.cc
@@ -92,7 +92,7 @@ absl::Status PropagateBoundsImpl(BoxView<> b,
for (DimensionIndex b_dim = 0; b_dim < b.rank(); ++b_dim) {
auto& map = maps[b_dim];
const Index output_stride = map.stride();
-// We dont't validate or propagate bounds for array-based output index maps.
+// We don't validate or propagate bounds for array-based output index maps.
if (map.method() == OutputIndexMethod::array) continue;
OptionallyImplicitIndexInterval b_bounds_oi{b[b_dim],
b_implicit_lower_bounds[b_dim],
4 changes: 2 additions & 2 deletions tensorstore/index_space/internal/transform_rep.h
@@ -368,8 +368,8 @@ inline void NormalizeImplicitBounds(TransformRep& rep) {
// Check that OutputIndexMap and std::string don't have a greater alignment
// value than Index, as that would require more complicated logic for accessing
// the variable length fields than is currently implemented. In practice these
-// constraints should always be satisified. If this code needs to work on a
-// platform that doesn't satisfy these contraints, the more complicated logic
+// constraints should always be satisfied. If this code needs to work on a
+// platform that doesn't satisfy these constraints, the more complicated logic
// could be implemented.
static_assert(alignof(OutputIndexMap) <= sizeof(Index),
"Platform has unsupported alignment.");
2 changes: 1 addition & 1 deletion tensorstore/internal/cache/async_cache.h
@@ -378,7 +378,7 @@ class AsyncCache : public Cache {
/// \param transaction[in,out] Transaction associated with the entry. If
/// non-null, must specify an explicit transaction, and an associated
/// transaction node will be created if one does not already exist. In
-this case, the `tranaction` pointer itself will not be modified. An
+this case, the `transaction` pointer itself will not be modified. An
/// implicit transaction node associated with a new implicit transaction
/// is requested by specifying `transaction` initially equally to
/// `nullptr`. Upon return, `transaction` will hold an open transaction
2 changes: 1 addition & 1 deletion tensorstore/internal/cache/cache_test.cc
@@ -70,7 +70,7 @@ class TestCache : public Cache {
std::deque<std::string> entry_allocate_log;
// Log of calls to DoDeleteEntry. Contains the cache key and entry key.
std::deque<std::pair<std::string, std::string>> entry_destroy_log;
-// Log of calls to GetTestCache (defined below). ontains the cache key.
+// Log of calls to GetTestCache (defined below). contains the cache key.
std::deque<std::string> cache_allocate_log;
// Log of calls to TestCache destructor. Contains the cache key.
std::deque<std::string> cache_destroy_log;
2 changes: 1 addition & 1 deletion tensorstore/internal/container/compressed_tuple_test.cc
@@ -308,7 +308,7 @@ TEST(CompressedTupleTest, Nested) {
std::set<Empty<0>*> empties{&y.get<0>(), &y.get<1>(), &y.get<2>().get<0>(),
&y.get<2>().get<1>().get<0>()};
#ifdef _MSC_VER
-// MSVC has a bug where many instances of the same base class are layed out in
+// MSVC has a bug where many instances of the same base class are laid out in
// the same address when using __declspec(empty_bases).
// This will be fixed in a future version of MSVC.
int expected = 1;
2 changes: 1 addition & 1 deletion tensorstore/internal/curl/curl_factory.h
@@ -23,7 +23,7 @@ namespace internal_http {
/// CurlHandleFactory creates and cleans up CURL* (CurlPtr) handles
/// and CURLM* (CurlMulti) handles.
///
-/// NOTE: These methods are virtual so that a curl factory can re-use
+/// NOTE: These methods are virtual so that a curl factory can reuse
/// curl handles.
class CurlHandleFactory {
public:
2 changes: 1 addition & 1 deletion tensorstore/internal/file_io_concurrency_resource.cc
@@ -27,7 +27,7 @@ namespace {
struct FileIoConcurrencyResourceTraits
: public ConcurrencyResourceTraits,
public ContextResourceTraits<FileIoConcurrencyResource> {
-// TODO(jbms): use beter method of picking concurrency limit
+// TODO(jbms): use better method of picking concurrency limit
FileIoConcurrencyResourceTraits()
: ConcurrencyResourceTraits(
std::max(size_t(4), size_t(std::thread::hardware_concurrency()))) {}
2 changes: 1 addition & 1 deletion tensorstore/internal/image/image_reader_test.cc
@@ -207,7 +207,7 @@ TEST_P(ReaderTest, ReadImageTruncated) {
}

// Most images generated via tiffcp <source> <options> <dest>.
-// Query image paramters using tiffinfo <image>
+// Query image parameters using tiffinfo <image>
std ::vector<V> GetD75_08_Values() {
return {
// upper-left corner: hw=0,0 => 151,75,83
2 changes: 1 addition & 1 deletion tensorstore/internal/image/image_writer_test.cc
@@ -234,7 +234,7 @@ TEST_P(WriterTest, RoundTrip) {

double rmse = ComputeRMSE(decoded.data(), source.data(), source.size());

-/// When RMSE is not 0, verify that the actual value is witin 5%.
+/// When RMSE is not 0, verify that the actual value is within 5%.
if (GetParam().rmse_error_limit == 0) {
EXPECT_EQ(0, rmse) << "\nA: " << source_info << " "
<< "\nB: " << decoded_info;
2 changes: 1 addition & 1 deletion tensorstore/internal/json_binding/json_binding.h
@@ -484,7 +484,7 @@ constexpr auto Projection(Proj projection, Binder binder = DefaultBinder<>) {
};
}

-/// Binder adapter that projects the parsed representation using gettter/setter
+/// Binder adapter that projects the parsed representation using getter/setter
/// functions.
///
/// Commonly this is used with `Member`, in order to bind a
2 changes: 1 addition & 1 deletion tensorstore/internal/riegeli/find.cc
@@ -34,7 +34,7 @@ bool StartsWith(riegeli::Reader &reader, std::string_view needle) {
memcmp(reader.cursor(), needle.data(), needle.size()) == 0;
}

-/// Seeks for the first occurence of data string starting from the current pos.
+/// Seeks for the first occurrence of data string starting from the current pos.
/// This works well enough for ZIP archives, since the tags do not have
/// internal repetition.
bool FindFirst(riegeli::Reader &reader, std::string_view needle) {