Merged
1 change: 1 addition & 0 deletions .github/scripts/gcs/Makefile
@@ -87,6 +87,7 @@ test-int:
export MULTIREGIONAL_BUCKET_NAME="$$(cat $(TMP_DIR)/multiregional.lock)" && \
export REGIONAL_BUCKET_NAME="$$(cat $(TMP_DIR)/regional.lock)" && \
export PUBLIC_BUCKET_NAME="$$(cat $(TMP_DIR)/public.lock)" && \
export SKIP_LONG_TESTS="" && \
cd ../../.. && go run github.com/onsi/ginkgo/v2/ginkgo gcs/integration/

# Perform all non-long tests, including integration tests.
9 changes: 6 additions & 3 deletions .github/scripts/s3/run-integration-aws-iam.sh
@@ -33,10 +33,12 @@ trap "cat ${lambda_log}" EXIT
# Go to the repository root (3 levels up from script directory)

pushd "${repo_root}" > /dev/null

export CGO_ENABLED=0
export GOOS=linux
export GOARCH=amd64
echo -e "\n building artifact with $(go version)..."
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o out/s3cli ./s3
CGO_ENABLED=0 ginkgo build s3/integration
go build -o out/s3cli
ginkgo build s3/integration

zip -j payload.zip s3/integration/integration.test out/s3cli ${script_dir}/assets/lambda_function.py

@@ -51,6 +53,7 @@ pushd "${repo_root}" > /dev/null
--handler lambda_function.test_runner_handler \
--runtime python3.9

echo "Create done, invoking Lambda function ${lambda_function_name}..."
set +e
tries=0
get_function_status_command="aws lambda get-function --region ${region_name} --function-name ${lambda_function_name}"
96 changes: 8 additions & 88 deletions .github/workflows/build.yml
@@ -17,102 +17,22 @@ jobs:
with:
go-version-file: 'go.mod'

- name: Alioss CLI Build for Linux
- name: Storage-CLI Build for Linux
env:
GOOS: linux
GOARCH: amd64
CGO_ENABLED: 0
run: |
echo "Building Alioss CLI for Linux"
go build -o "alioss-cli-linux-amd64" ./alioss
sha1sum "alioss-cli-linux-amd64"
echo "Building Storage CLI for Linux"
go build -o "storage-cli-linux-amd64"
sha1sum "storage-cli-linux-amd64"

- name: Alioss CLI Build for Windows
- name: Storage-CLI Build for Windows
env:
GOOS: windows
GOARCH: amd64
CGO_ENABLED: 0
run: |
echo "Building Alioss CLI for Windows"
go build -o "alioss-cli-windows-amd64.exe" ./alioss
sha1sum "alioss-cli-windows-amd64.exe"

- name: Azurebs CLI Build for Linux
env:
GOOS: linux
GOARCH: amd64
CGO_ENABLED: 0
run: |
echo "Building Azurebs CLI for Linux"
go build -o "azurebs-cli-linux-amd64" ./azurebs
sha1sum "azurebs-cli-linux-amd64"

- name: Azurebs CLI Build for Windows
env:
GOOS: windows
GOARCH: amd64
CGO_ENABLED: 0
run: |
echo "Building Azurebs CLI for Windows"
go build -o "azurebs-cli-windows-amd64.exe" ./azurebs
sha1sum "azurebs-cli-windows-amd64.exe"

- name: Dav CLI Build for Linux
env:
GOOS: linux
GOARCH: amd64
CGO_ENABLED: 0
run: |
echo "Building Dav CLI for Linux"
go build -o "dav-cli-linux-amd64" ./dav/main
sha1sum "dav-cli-linux-amd64"

- name: Dav CLI Build for Windows
env:
GOOS: windows
GOARCH: amd64
CGO_ENABLED: 0
run: |
echo "Building Dav CLI for Windows"
go build -o "dav-cli-windows-amd64.exe" ./dav/main
sha1sum "dav-cli-windows-amd64.exe"

- name: GCS CLI Build for Linux
env:
GOOS: linux
GOARCH: amd64
CGO_ENABLED: 0
run: |
echo "Building Gcs CLI for Linux"
go build -o "gcs-cli-linux-amd64" ./gcs
sha1sum "gcs-cli-linux-amd64"

- name: GCS CLI Build for Windows
env:
GOOS: windows
GOARCH: amd64
CGO_ENABLED: 0
run: |
echo "Building Gcs CLI for Windows"
go build -o "gcs-cli-windows-amd64.exe" ./gcs
sha1sum "gcs-cli-windows-amd64.exe"

- name: S3 CLI Build for Linux
env:
GOOS: linux
GOARCH: amd64
CGO_ENABLED: 0
run: |
echo "Building S3 CLI for Linux"
go build -o "s3-cli-linux-amd64" ./s3
sha1sum "s3-cli-linux-amd64"

- name: S3 CLI Build for Windows
env:
GOOS: windows
GOARCH: amd64
CGO_ENABLED: 0
run: |
echo "Building S3 CLI for Windows"
go build -o "s3-cli-windows-amd64.exe" ./s3
sha1sum "s3-cli-windows-amd64.exe"
echo "Building Storage CLI for Windows"
go build -o "storage-cli-windows-amd64.exe"
sha1sum "storage-cli-windows-amd64.exe"
2 changes: 2 additions & 0 deletions .gitignore
@@ -27,3 +27,5 @@
# IDE/editor
.vscode/
.idea/

.DS_Store
92 changes: 84 additions & 8 deletions README.md
@@ -1,15 +1,17 @@
# Storage CLI
This repository consolidates five independent blob-storage CLIs, one per provider, into a single codebase. Each provider has its own dedicated directory (azurebs/, s3/, gcs/, alioss/, dav/), containing an independent main package and implementation. The tools are intentionally maintained as separate binaries, preserving each provider’s native SDK, command-line flags, and operational semantics. Each CLI exposes similar high-level operations (e.g., put, get, delete).
A unified command-line tool for interacting with multiple cloud storage providers through a single binary. The CLI supports five blob-storage providers (Azure Blob Storage, AWS S3, Google Cloud Storage, Alibaba Cloud OSS, and WebDAV), each with its own client implementation while sharing a common command interface.

**Note:** This CLI works with existing storage resources (buckets, containers, etc.) that are already created and configured in your cloud provider. The storage bucket/container name and credentials must be specified in the provider-specific configuration file.

Key points

- Each provider builds independently.
- Single binary with provider selection via `-s` flag.

- Client setup, config, and options are contained within the provider’s folder.
- Each provider has its own directory (azurebs/, s3/, gcs/, alioss/, dav/) containing client implementations and configurations.

- All tools support the same core commands (such as put, get, and delete) for a familiar workflow, while each provider defines its own flags, parameters, and execution flow that align with its native SDK and terminology.
- All providers support the same core commands (put, get, delete, exists, list, copy, etc.).

- Central issue tracking, shared CI, and aligned release process without merging implementations.
- Provider-specific configurations are passed via JSON config files.


## Providers
@@ -21,12 +23,86 @@ Key points


## Build
Use following command to build it locally
Build the unified storage CLI binary:

```shell
go build -o storage-cli
```

Or with version information:
```shell
go build -o <provider-folder-name>/<build-name> <provider-folder-name>/main.go
go build -ldflags "-X main.version=1.0.0" -o storage-cli
```
e.g. `go build -o alioss/alioss-cli alioss/main.go`
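The `-X main.version=1.0.0` flag shown above overwrites a package-level string variable at link time. A minimal sketch of the variable that flag assumes exists in `package main` — the name `version` is taken from the flag itself; the `"dev"` default is an assumption:

```go
package main

import "fmt"

// version is overwritten at link time by:
//   go build -ldflags "-X main.version=1.0.0" -o storage-cli
// Defaulting to "dev" when built without the flag is an assumption.
var version = "dev"

func main() {
	fmt.Println("storage-cli version:", version)
}
```

Built without `-ldflags`, the binary reports the default value; the flag only works on uninitialized-or-string package variables in the named package.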

## Usage

The CLI uses a unified command structure across all providers:

```shell
storage-cli -s <provider> -c <config-file> <command> [arguments]
```

**Flags:**
- `-s`: Storage provider type (azurebs|s3|gcs|alioss|dav)
- `-c`: Path to provider-specific configuration file
- `-v`: Show version

**Common commands:**
- `put <path/to/file> <remote-object>` - Upload a local file to remote storage
- `get <remote-object> <path/to/file>` - Download a remote object to local file
- `delete <remote-object>` - Delete a remote object
- `delete-recursive [prefix]` - Delete objects recursively. If prefix is omitted, deletes all objects
- `exists <remote-object>` - Check if a remote object exists (exits with code 3 if not found)
- `list [prefix]` - List remote objects. If prefix is omitted, lists all objects
- `copy <source-object> <destination-object>` - Copy object within the same storage
- `sign <object> <action> <duration_as_second>` - Generate signed URL (action: get|put, duration: e.g., 60s)
- `properties <remote-object>` - Display properties/metadata of a remote object
- `ensure-storage-exists` - Ensure the storage container/bucket exists

**Examples:**
```shell
# Upload file to S3
storage-cli -s s3 -c s3-config.json put local-file.txt remote-object.txt

# List GCS objects with prefix
storage-cli -s gcs -c gcs-config.json list my-prefix

# Check if Azure blob exists
storage-cli -s azurebs -c azure-config.json exists my-blob.txt

# Get properties of an object
storage-cli -s azurebs -c azure-config.json properties my-blob.txt

# Sign object for 'get' in alioss for 60 seconds
storage-cli -s alioss -c alioss-config.json sign object.txt get 60s
```
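The documented behavior that `exists` exits with code 3 when an object is not found makes the command easy to script against. A hypothetical Go wrapper that maps the exit status (0 = present, 3 = not found, anything else = error) is sketched below; the invocation in `main` assumes `storage-cli` is on `PATH` and reuses the config names from the examples above.

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// objectExists runs an "exists"-style command and interprets its exit
// status: 0 means present, 3 means not found (per the CLI's documented
// behavior), and any other failure is returned as an error.
func objectExists(argv ...string) (bool, error) {
	err := exec.Command(argv[0], argv[1:]...).Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 3 {
		return false, nil
	}
	return false, err
}

func main() {
	// Hypothetical invocation; binary and config names are assumptions.
	found, err := objectExists("storage-cli", "-s", "s3", "-c", "s3-config.json", "exists", "my-object.txt")
	fmt.Println(found, err)
}
```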

## Contributing

Follow these steps to make a contribution to the project:

- Fork this repository
- Create a feature branch based upon the `main` branch (*pull requests must be made against this branch*)
``` bash
git checkout -b feature-name origin/main
```
- Run tests to check your development environment setup
``` bash
ginkgo --race --skip-package=integration --cover -v -r ./...
```
- Make your changes (*be sure to add/update tests*)
- Run tests to check your changes
``` bash
ginkgo --race --skip-package=integration --cover -v -r ./...
```
- If you added or modified integration tests, run them locally by following the instructions in the provider-specific README (see the [Providers](#providers) section)
- Push changes to your fork
``` bash
git add .
git commit -m "Commit message"
git push origin feature-name
```
- Create a GitHub pull request, selecting `main` as the target branch


## Notes
63 changes: 30 additions & 33 deletions alioss/README.md
@@ -1,11 +1,14 @@
# Ali Storage CLI
# Alibaba Cloud OSS Client

The Ali Storage CLI is for uploading, fetching and deleting content to and from an Ali OSS.
It is highly inspired by the [storage-cli/s3](https://github.com/cloudfoundry/storage-cli/blob/6058f516e9b81471b64a50b01e228158a05731f0/s3)
Alibaba Cloud OSS (Object Storage Service) client implementation for the unified storage-cli tool. This module provides Alibaba Cloud OSS operations through the main storage-cli binary.

## Usage
**Note:** This is not a standalone CLI. Use the main `storage-cli` binary with `-s alioss` flag to access AliOSS functionality.

Given a JSON config file (`config.json`)...
For general usage and build instructions, see the [main README](../README.md).

## AliOSS-Specific Configuration

The AliOSS client requires a JSON configuration file with the following structure:

``` json
{
@@ -16,46 +19,40 @@ Given a JSON config file (`config.json`)...
}
```

**Usage examples:**
``` bash
# Command: "put"
# Upload a blob to the blobstore.
./alioss-cli -c config.json put <path/to/file> <remote-blob>

# Command: "get"
# Fetch a blob from the blobstore.
# Destination file will be overwritten if exists.
./alioss-cli -c config.json get <remote-blob> <path/to/file>

# Command: "delete"
# Remove a blob from the blobstore.
./alioss-cli -c config.json delete <remote-blob>

# Command: "exists"
# Checks if blob exists in the blobstore.
./alioss-cli -c config.json exists <remote-blob>

# Command: "sign"
# Create a self-signed url for a blob in the blobstore.
./alioss-cli -c config.json sign <remote-blob> <get|put> <seconds-to-expiration>
# Upload a blob
storage-cli -s alioss -c alioss-config.json put local-file.txt remote-blob

# Fetch a blob (destination file will be overwritten if exists)
storage-cli -s alioss -c alioss-config.json get remote-blob local-file.txt

# Delete a blob
storage-cli -s alioss -c alioss-config.json delete remote-blob

# Check if blob exists
storage-cli -s alioss -c alioss-config.json exists remote-blob

# Generate a signed URL (e.g., GET for 3600 seconds)
storage-cli -s alioss -c alioss-config.json sign remote-blob get 3600s
```

### Using signed urls with curl
### Using Signed URLs with curl
``` bash
# Uploading a blob:
curl -X PUT -T path/to/file <signed url>
curl -X PUT -T path/to/file <signed-url>

# Downloading a blob:
curl -X GET <signed url>
curl -X GET <signed-url>
```
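The curl examples above can also be expressed programmatically. A sketch of a Go helper that `PUT`s a local file to a pre-signed URL (the URL is assumed to come from the `sign` command shown earlier):

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

// uploadToSignedURL PUTs a local file to a pre-signed URL, mirroring
// `curl -X PUT -T path/to/file <signed-url>` above.
func uploadToSignedURL(signedURL, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	req, err := http.NewRequest(http.MethodPut, signedURL, f)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("upload failed: %s", resp.Status)
	}
	return nil
}

func main() {
	if len(os.Args) == 3 {
		if err := uploadToSignedURL(os.Args[1], os.Args[2]); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}
```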
## Running Tests

## Testing

### Unit Tests
**Note:** Run the following commands from the repository root directory.
Run from the repository root directory:

```bash
go install github.com/onsi/ginkgo/v2/ginkgo

ginkgo --skip-package=integration --randomize-all --cover -v -r ./alioss/...
ginkgo --skip-package=integration --cover -v -r ./alioss/...
```
### Integration Tests
