A server that implements the Google Cloud Storage API. At this time, it is a wrapper that translates GCS JSON API calls to S3 calls.
- Set up a PostgreSQL server to use for storing bucket metadata and session state.
- Set up an S3-compatible server to use for storing objects.
- In `settings.cfg`, specify `S3_ADMIN_CREDS`, `S3_HOST`, `S3_PORT`, and `POSTGRES_DB`, and define at least one user entry (see the sketch after this list). Use the private key defined in a JSON service credentials file to generate the certificate file.
- Install AppScale Cloud Storage with `python3 setup.py install`. Using a virtualenv is recommended.
- Run `appscale-prime-cloud-storage` to generate the required Postgres tables.
- Define the following environment variables: `export FLASK_APP=appscale.cloud_storage` and `export APPSCALE_CLOUD_STORAGE_SETTINGS=/path/to/settings.cfg`.
- Start the server with `flask run`.
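
The sketch below shows what a `settings.cfg` might look like. The key names come from the step above; the value shapes (credential dictionaries, the Postgres connection string, and the `USERS` layout) are illustrative assumptions, so check them against your deployment before use:

```python
# settings.cfg -- loaded via APPSCALE_CLOUD_STORAGE_SETTINGS.
# Flask config files use Python syntax. All values are placeholders, and the
# dictionary layouts below are assumptions, not a documented schema.

# Credentials for an S3 user that can administer buckets on the backend.
S3_ADMIN_CREDS = {'access_key': 's3-admin-access-key',
                  'secret_key': 's3-admin-secret-key'}

# Location of the S3-compatible object store.
S3_HOST = 's3.example.com'
S3_PORT = 8080

# Connection info for the PostgreSQL server that stores bucket metadata
# and session state.
POSTGRES_DB = 'dbname=appscale_cloud_storage user=appscale host=localhost'

# At least one user entry: a service account email mapped to the certificate
# generated from the account's private key, plus that user's S3 credentials.
USERS = {'user@example-project.iam.gserviceaccount.com': {
    'certificate': '/path/to/certificate.pem',
    'aws_access_key': 'user-access-key',
    'aws_secret_key': 'user-secret-key'}}
```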
You can use Google's client libraries or the Cloud SDK to interact with the server. For authentication, create or use existing JSON service account credentials with `auth_uri` and `token_uri` pointing to the AppScale Cloud Storage server.
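
One way to do that is to rewrite those two fields in an existing key file. The snippet below is a minimal sketch; the endpoint paths appended to the server address are assumptions, so substitute whatever routes your deployment actually serves:

```python
import json

# Placeholders: an existing service account key file and the address of
# the AppScale Cloud Storage server.
CREDS_PATH = '/path/to/service/creds.json'
SERVER = 'http://[server-address]:[port]'

with open(CREDS_PATH) as creds_file:
    creds = json.load(creds_file)

# Assumed endpoint paths; adjust to match the routes your server exposes.
creds['auth_uri'] = SERVER + '/o/oauth2/auth'
creds['token_uri'] = SERVER + '/o/oauth2/token'

with open(CREDS_PATH, 'w') as creds_file:
    json.dump(creds, creds_file, indent=2)
```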
With the credentials in place, the following example constructs a client that talks to the server:

```python
# Imports
from gcloud import storage
from oauth2client.service_account import ServiceAccountCredentials

# Configuration
storage.connection.Connection.API_BASE_URL = 'http://[server-address]:[port]'
SCOPES = ['https://www.googleapis.com/auth/devstorage.read_write']
SERVICE_CREDS = '/path/to/service/creds.json'

# Construct the client.
credentials = ServiceAccountCredentials.from_json_keyfile_name(
    SERVICE_CREDS, scopes=SCOPES)
client = storage.Client(credentials=credentials)
```
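
Standard client calls then go through the server. A minimal usage sketch, assuming your `gcloud` version supports `list_buckets` (bucket names come from the backing S3 store):

```python
# List the buckets visible to this account through the proxy.
for bucket in client.list_buckets():
    print(bucket.name)
```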
- Due to a limitation in the S3 API, the minimum size for non-terminal chunks when performing uploads is 5MB instead of 256KB. In Python, for example, you can account for this by defining `chunk_size=5 << 20` when creating `storage.blob.Blob` objects, as shown below.
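
A minimal sketch of that workaround, assuming the `client` from the example above and placeholder bucket/object names:

```python
from gcloud import storage

# 5 << 20 bytes = 5MB, the minimum non-terminal chunk size the S3 API allows.
bucket = client.get_bucket('example-bucket')
blob = storage.blob.Blob('example-object', bucket, chunk_size=5 << 20)
blob.upload_from_filename('/path/to/local/file')
```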