diff --git a/BreakingChanges.txt b/BreakingChanges.txt
index 86da6225..c240896d 100644
--- a/BreakingChanges.txt
+++ b/BreakingChanges.txt
@@ -1,6 +1,30 @@
 Azure Storage Client Library for C++
 History of Breaking Changes
 
+Breaking Changes in v7.0:
+- Default REST API version is 2019-02-02.
+- Upgraded Casablanca dependency to 2.10.14.
+- Raised minimum required GCC version to 5.1.
+- SAS returned by calling `azure::storage::cloud_blob::get_shared_access_signature` on a snapshot object only has access to the snapshot, not the entire blob including all snapshots as before.
+- Fixed a typo in API `azure::storage::cloud_file_share::download_share_usage_async`.
+- Fixed a typo in API `azure::storage::cloud_queue_message::next_visible_time`.
+- `azure::storage::get_wastorage_ambient_scheduler` always returns by value.
+
+Breaking Changes in v6.0:
+- `azure::storage::blob_request_options` now accepts max_execution_time as `std::chrono::milliseconds`. A previous `std::chrono::seconds` value is automatically converted to `std::chrono::milliseconds`, but there can be a behavioral change because the precision has changed.
+- Resolved an issue where the first forward slash at the front of the blob name was always trimmed. As a result, blobs whose names were trimmed prior to this release are no longer reachable with the same input.
+
+Breaking Changes in v5.0:
+- Dropped the NuGet package named 'wastorage'.
+
+Breaking Changes in v4.0:
+- `azure::storage::file::upload_properties` and `azure::storage::file::upload_properties_async` will no longer resize a file, even when the properties contain a length set to 0 or another value.
+
+Breaking Changes in v3.0:
+- Default REST API version is 2016-05-31.
+- Using If-None-Match: * will now fail when reading a blob.
+- Renamed TargetName for the Debug configuration from wastorage to wastoraged.
+
 Breaking Changes in v2.5:
 - Upgraded Casablanca dependency to 2.9.1
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index e74490ac..e52f6815 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -83,7 +83,7 @@
 sudo apt-get install libxml++2.6-dev libxml++2.6-doc uuid-dev
 cd azure-storage-cpp/Microsoft.WindowsAzure.Storage
 mkdir build.release
 cd build.release
-CASABLANCA_DIR= CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release
+CASABLANCA_DIR= CXX=g++-5.1 cmake .. -DCMAKE_BUILD_TYPE=Release
 make
 ```
 In the above command, replace `` to point to your local
@@ -91,7 +91,7 @@
 installation of Casablanca. For example, if the file `libcpprest.so` exists at
 location `~/Github/Casablanca/cpprestsdk/Release/build.release/Binaries/libcpprest.so`,
 then your `cmake` command should be:
 ```bash
-CASABLANCA_DIR=~/Github/Casablanca/cpprestsdk CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release
+CASABLANCA_DIR=~/Github/Casablanca/cpprestsdk CXX=g++-5.1 cmake .. -DCMAKE_BUILD_TYPE=Release
 ```
 The library is generated under `azure-storage-cpp/Microsoft.WindowsAzure.Storage/build.release/Binaries/`.
@@ -105,7 +105,7 @@
 sudo apt-get install libunittest++-dev
 #### Build the Test Code
 ```bash
-CASABLANCA_DIR= CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTS=ON
+CASABLANCA_DIR= CXX=g++-5.1 cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTS=ON
 make
 ```
 The test binary `azurestoragetest` and `test_configuration.json` are generated under
@@ -126,7 +126,7 @@
 cd Binaries
 ### Samples
 ```bash
-CASABLANCA_DIR= CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SAMPLES=ON
+CASABLANCA_DIR= CXX=g++-5.1 cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SAMPLES=ON
 make
 ```
diff --git a/Changelog.txt b/Changelog.txt
index 1ec81e8b..0ef33d42 100644
--- a/Changelog.txt
+++ b/Changelog.txt
@@ -1,6 +1,156 @@
 Azure Storage Client Library for C++
 History of Changes
 
+Changes in v7.5.0
+- New feature: Blob Versioning.
+- New feature: Jumbo Put Block.
+- New feature: Jumbo Put Blob.
+- Service version upgraded to 2019-12-12.
+
+Changes in v7.4.0
+- New feature: Premium File Share Properties.
+- Fixed a bug: crash in table batch operation.
+- Upgraded CPPRest to latest version 2.10.16.
+
+Changes in v7.3.1
+- Fixed a bug: RangeNotSatisfiable exception was mistakenly swallowed.
+- Fixed a bug: file length was not returned when listing files.
+
+Changes in v7.3.0
+- New feature: Customer provided key (CPK-V)
+- New feature: File lease
+- Upgraded CPPRest to latest version 2.10.15.
+
+Changes in v7.2.0
+- New feature: Previous snapshot with URL.
+- Service version upgraded to 2019-07-07.
+
+Changes in v7.1.0
+- New feature: User delegation SAS.
+- Fixed a bug in snapshot SAS.
+- Fixed some unit test failures.
+- Fixed some crash issues.
+- Added an API with which users can customize HTTP connection settings.
+
+Changes in v7.0.0
+- Default REST API version is 2019-02-02.
+- Upgraded CPPRest to latest version 2.10.14.
+- Raised minimum required GCC version to 5.1.
+- Added new API `azure::storage::cloud_file_share::download_share_usage_in_bytes`.
+- SAS returned by calling `azure::storage::cloud_blob::get_shared_access_signature` on a snapshot object only has access to the snapshot, not the entire blob including all snapshots as before.
+- Added support for AAD-based OAuth bearer token authentication.
+- Added support for the CRC64 transactional data integrity mechanism as an alternative to MD5.
+- Added new APIs `azure::storage::cloud_file_share::upload_file_permission` and `azure::storage::cloud_file_share::download_file_permission` to support creating/retrieving a security descriptor at the Azure File share level.
+- Added support for a set of new headers on Azure File APIs.
+
+Changes in v6.1.0
+- Default REST API version is 2018-03-28.
+- Upgraded CPPRest to latest version: 2.10.13.
+- Added the following new API for `cloud_blob_client`, `cloud_blob_container` and `cloud_blob` to support getting Azure Storage account properties.
+  - `pplx::task download_account_properties_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)`.
+- Resolved a memory leak issue on the Linux platform in xml_wrapper.cpp.
+- Added support to build with Visual Studio 2017.
+- Resolved an issue where an error could occasionally be reported when uploading to a block blob with `cloud_block_blob::open_write`.
+- Resolved an issue with unexpected behavior when uploading from a local file whose length is larger than the maximum int32 value to a block blob, append blob, page blob or Azure Storage file.
+
+Changes in v6.0.0
+- Upgraded CPPRest to latest version: 2.10.10.
+- Resolved a memory leak issue on the Linux platform in xml_wrapper.cpp.
+- Resolved an issue where `download_range_to_stream` for `azure::storage::blob` or `azure::storage::file` would update the blob/file MD5 unexpectedly.
+- Added an option `ENABLE_MT` in CMake to build azure-storage-cpp using /MT.
+- Resolved an issue where the first forward slash at the front of the blob name was always trimmed.
+- Supported cancellation tokens in asynchronous operations.
+- Supported millisecond-level timeouts.
+
+Changes in v5.2.0
+- Resolved an issue where listing blobs for blob tier returned a faulty result when built with VS2013.
+- Resolved an issue where listing files or directories returned a faulty server encryption status when built with VS2013.
+- Added support to set the blob tier with lease conditions.
+- Added a detailed guideline on how to build on SLES12 SP3, and fixed the CMake build script accordingly.
+- Fixed randomly failing test cases.
+
+Changes in v5.1.1
+- Added an API: `azure::storage::operation_context::set_ssl_context_callback`. Users can use this API to customize the SSL callback, for example to change the location of the SSL cert. This is a Linux-only API.
+
+Changes in v5.0.1
+- Resolved an issue where the default CMake version 2.8 on Ubuntu 14.04 could not build this client library.
+
+Changes in v5.1.0
+- Stopped releasing the public NuGet package starting from this version.
+- Default REST API version is 2017-07-29.
+- Added support to put messages with messagettl = -1, and lifted the cap on the message time-to-live value.
+- Now compatible with OpenSSL 1.1.0.
+- Upgraded the version of the dependency Casablanca to 2.10.3.
+
+Changes in v5.0.0
+- Dropped support for the NuGet package named 'wastorage'; this client library is now released only under the NuGet name 'Microsoft.Azure.Storage.CPP'.
+- Added support for specifying the installation destination when compiling with cmake, to resolve feature request #183.
+- Enabled CMake on Windows.
+- Added a check that fails fast on metadata names that are empty or contain whitespace, and trimmed leading/trailing whitespace from metadata values.
+- Resolved an issue where a partial XML body did not throw an exception when parsed.
+- Resolved an issue where a retry of a Table batch operation always returned the first response.
+
+Changes in v4.0.0:
+- Fixed an issue where blob names that contain only spaces could not be listed properly.
+- Added more compiler settings to unblock ApiScanning.
+- Added a new `nuspec` set that releases `Microsoft.Azure.Storage.CPP`, and modified the `wastorage` nuspec to depend on Microsoft.Azure.Storage.CPP. Note that 4.0.0 will be the last version of `wastorage`. We strongly advise users to use the NuGet package `Microsoft.Azure.Storage.CPP` starting from 4.0.0.
+- Resolved an issue where the third retry attempt to download a blob or file would write more data than expected to the destination stream.
+- Removed the dependency on `libxml++` for Linux platforms; this SDK now depends on `libxml2`, which is available for most Linux distributions.
+- `azure::storage::file::upload_properties` and `azure::storage::file::upload_properties_async` will no longer resize a file, even when the properties contain a length set to 0 or another value.
+
+Changes in v3.2.1:
+- Added the `requireLicenseAcceptance` tag set to `true` in the .nuspec files and modified the author to `Microsoft` to comply with the new Microsoft NuGet package rule set. Note that this requires NuGet package users to accept the license manually when newly installing or upgrading to version 3.2.1 (and later) of this client library.
+
+Changes in v3.2:
+- Default REST API version is 2017-04-17.
+- Added support for the blob archive feature, which includes the following changes:
+  - Added two new APIs for `cloud_blob` to support copying a blob to a new destination while setting the premium access tier. Note that currently only premium page blobs are supported, so these two APIs should only be used for page blobs.
+    - `utility::string_t start_copy(const web::http::uri& source, const azure::storage::premium_blob_tier tier, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context)`.
+    - `pplx::task start_copy_async(const web::http::uri& source, const premium_blob_tier tier, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context)`.
+  - Added two new APIs for block blob to support setting the standard blob tier.
+    - `void set_standard_blob_tier(const standard_blob_tier tier, const access_condition & condition, const blob_request_options & options, operation_context context)`
+    - `pplx::task set_standard_blob_tier_async(const standard_blob_tier tier, const access_condition & condition, const blob_request_options & options, operation_context context)`
+  - Added two new APIs for page blob to support setting the premium blob tier.
+    - `void set_premium_blob_tier(const premium_blob_tier tier, const access_condition & condition, const blob_request_options & options, operation_context context)`
+    - `pplx::task set_premium_blob_tier_async(const premium_blob_tier tier, const access_condition & condition, const blob_request_options & options, operation_context context)`
+  - `cloud_blob_properties` now contains 4 more attributes: `standard_blob_tier`, `premium_blob_tier`, `archive_status`, `access_tier_inferred`. They will be updated using `download_attributes`, and will be set to the server-returned value when calling `list_blobs`.
+  - Added two new APIs to support creating a page blob with a premium access tier.
+    - `pplx::task create_async(utility::size64_t size, const premium_blob_tier tier, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context)`
+    - `void create(utility::size64_t size, const premium_blob_tier tier, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context)`
+
+Changes in v3.1:
+- #136 Fixed an error in get blob properties when the value in x-ms-copy-source is not encoded correctly.
+- #124 Fixed a bug in parallel download when the offset is not zero.
+- #123 Fixed a bug where the client would overwrite the default retry policy.
+- Fixed #132, a build issue caused by the macro min.
+- Threw an exception to warn on the conflict between primary_only and download_services_stats.
+- Fixed a bug where a file share's ETag was not updated after a quota resize.
+- Fixed a bug where a file directory's LMT was not updated after uploading metadata.
+- Fixed a bug where a file's LMT was not updated after uploading metadata.
+
+Changes in v3.0:
+- Default REST API version is 2016-05-31.
+- Increased the supported block size to 100 MB and the single blob upload threshold to 256 MB.
+- Added cloud_blob_container_properties::public_access for the public access level of a container.
+  The value will be populated in:
+  - cloud_blob_client::list_containers
+  - cloud_blob_container::create
+  - cloud_blob_container::download_attributes
+  - cloud_blob_container::download_permissions
+- Message information including the pop receipt will now be populated into the passed-in message in cloud_queue::add_message.
+- API cloud_file_directory::list_files_and_directories now accepts a new parameter that limits the listing to a specified prefix.
+- All table APIs now accept and enforce the timeout query parameter.
+- The value of cloud_blob_properties::content_md5 for the stored Content-MD5 property will also be populated in cloud_blob::download_range_to_stream.
+- Added cloud_page_blob::start_incremental_copy to support incrementally copying a snapshot of the source page blob to a destination page blob.
+- Using If-None-Match: * will now fail when reading a blob.
+- Included empty headers when signing the request.
+- Fixed a bug that might cause an "invalid handle" exception during retry for download-to-stream APIs.
+- Fixed the issue that the library does not work with the v141 toolset.
+- Fixed the build issue for MFC/ATL projects caused by the macro "max".
+- Changed constant strings' type from * to [].
+- Fixed a compile error when _MSC_VER=1810.
+- Used <> instead of "" to include package headers.
+- Renamed TargetName for the Debug configuration from wastorage to wastoraged.
+
 Changes in v2.6:
 - Supported parallel download for blobs and files
 - Supported installation from Vcpkg
diff --git a/Doxyfile b/Doxyfile
index 0501ed8b..63213ae4 100644
--- a/Doxyfile
+++ b/Doxyfile
@@ -38,7 +38,7 @@ PROJECT_NAME = "Microsoft Azure Storage Client Library for C++"
 # could be handy for archiving the generated documentation or if some version
 # control system is used.
-PROJECT_NUMBER = 2.6.0 +PROJECT_NUMBER = 7.5.0 # Using the PROJECT_BRIEF tag one can provide an optional one line description # for a project that appears at the top of each page and should give viewer a diff --git a/Microsoft.WindowsAzure.Storage.v120.sln b/Microsoft.WindowsAzure.Storage.v120.sln deleted file mode 100644 index 737efa7a..00000000 --- a/Microsoft.WindowsAzure.Storage.v120.sln +++ /dev/null @@ -1,28 +0,0 @@ - -Microsoft Visual Studio Solution File, Format Version 12.00 -# Visual Studio 2013 -VisualStudioVersion = 12.0.40629.0 -MinimumVisualStudioVersion = 10.0.40219.1 -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.v120", "Microsoft.WindowsAzure.Storage\Microsoft.WindowsAzure.Storage.v120.vcxproj", "{DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}" -EndProject -Global - GlobalSection(SolutionConfigurationPlatforms) = preSolution - Debug|Win32 = Debug|Win32 - Debug|x64 = Debug|x64 - Release|Win32 = Release|Win32 - Release|x64 = Release|x64 - EndGlobalSection - GlobalSection(ProjectConfigurationPlatforms) = postSolution - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Debug|Win32.ActiveCfg = Debug|Win32 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Debug|Win32.Build.0 = Debug|Win32 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Debug|x64.ActiveCfg = Debug|x64 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Debug|x64.Build.0 = Debug|x64 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Release|Win32.ActiveCfg = Release|Win32 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Release|Win32.Build.0 = Release|Win32 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Release|x64.ActiveCfg = Release|x64 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Release|x64.Build.0 = Release|x64 - EndGlobalSection - GlobalSection(SolutionProperties) = preSolution - HideSolutionNode = FALSE - EndGlobalSection -EndGlobal diff --git a/Microsoft.WindowsAzure.Storage.v141.sln b/Microsoft.WindowsAzure.Storage.v141.sln new file mode 100644 index 00000000..f24e24a6 --- /dev/null +++ 
b/Microsoft.WindowsAzure.Storage.v141.sln @@ -0,0 +1,32 @@ + +Microsoft Visual Studio Solution File, Format Version 12.00 +# Visual Studio 15 +VisualStudioVersion = 15.0.28307.421 +MinimumVisualStudioVersion = 10.0.40219.1 +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.v141", "Microsoft.WindowsAzure.Storage\Microsoft.WindowsAzure.Storage.v141.vcxproj", "{25D342C3-6CDA-44DD-A16A-32A19B692785}" +EndProject +Global + GlobalSection(SolutionConfigurationPlatforms) = preSolution + Debug|x64 = Debug|x64 + Debug|x86 = Debug|x86 + Release|x64 = Release|x64 + Release|x86 = Release|x86 + EndGlobalSection + GlobalSection(ProjectConfigurationPlatforms) = postSolution + {25D342C3-6CDA-44DD-A16A-32A19B692785}.Debug|x64.ActiveCfg = Debug|x64 + {25D342C3-6CDA-44DD-A16A-32A19B692785}.Debug|x64.Build.0 = Debug|x64 + {25D342C3-6CDA-44DD-A16A-32A19B692785}.Debug|x86.ActiveCfg = Debug|Win32 + {25D342C3-6CDA-44DD-A16A-32A19B692785}.Debug|x86.Build.0 = Debug|Win32 + {25D342C3-6CDA-44DD-A16A-32A19B692785}.Release|x64.ActiveCfg = Release|x64 + {25D342C3-6CDA-44DD-A16A-32A19B692785}.Release|x64.Build.0 = Release|x64 + {25D342C3-6CDA-44DD-A16A-32A19B692785}.Release|x86.ActiveCfg = Release|Win32 + {25D342C3-6CDA-44DD-A16A-32A19B692785}.Release|x86.Build.0 = Release|Win32 + EndGlobalSection + GlobalSection(SolutionProperties) = preSolution + HideSolutionNode = FALSE + EndGlobalSection + GlobalSection(ExtensibilityGlobals) = postSolution + SolutionGuid = {EFC9951D-9CFA-4AF9-8EA2-0DE2E2A03DF5} + SolutionGuid = {657A09C9-C87D-40F4-B9C2-A41DE55BB7A2} + EndGlobalSection +EndGlobal diff --git a/Microsoft.WindowsAzure.Storage/CMakeLists.txt b/Microsoft.WindowsAzure.Storage/CMakeLists.txt index 258a0387..ac9e65d8 100644 --- a/Microsoft.WindowsAzure.Storage/CMakeLists.txt +++ b/Microsoft.WindowsAzure.Storage/CMakeLists.txt @@ -1,12 +1,18 @@ set(CMAKE_LEGACY_CYGWIN_WIN32 0) -cmake_minimum_required(VERSION 2.6) + +if(WIN32) + cmake_minimum_required(VERSION 3.8) 
+else() + cmake_minimum_required(VERSION 2.6) +endif() + project(azurestorage) enable_testing() set(WARNINGS) -set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake/Modules/") +set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_CURRENT_LIST_DIR}/cmake/Modules/") option(BUILD_TESTS "Build test codes" OFF) option(BUILD_SAMPLES "Build sample codes" OFF) @@ -19,39 +25,47 @@ if(UNIX) # Prefer a homebrew version of OpenSSL over the one in /usr/lib file(GLOB OPENSSL_ROOT_DIR /usr/local/Cellar/openssl/*) - # Prefer the latest (make the latest one first) + # Prefer the latest (make the latest one first) list(REVERSE OPENSSL_ROOT_DIR) - # There is a dependency chain Libxml++ -> glibmm -> gobject -> glib -> lintl, however, for some reason, - # with homebrew at least, the -L for where lintl resides is left out. So, we try to find it where homebrew - # would put it, or allow the user to specify it - if(NOT GETTEXT_LIB_DIR) - message(WARNING "No GETTEXT_LIB_DIR specified, assuming: /usr/local/opt/gettext/lib") - set(GETTEXT_LIB_DIR "/usr/local/opt/gettext/lib") - endif() - # If we didn't find it where homebrew would put it, and it hasn't been specified, then we have to throw an error - if(NOT IS_DIRECTORY "${GETTEXT_LIB_DIR}") - message(ERROR "We couldn't find your gettext lib directory (${GETTEXT_LIB_DIR}). Please re-run cmake with -DGETTEXT_LIB_DIR=. 
This is usually where libintl.a and libintl.dylib reside.") - endif() - - # if we actually have a GETTEXT_LIB_DIR we add the linker flag for it - set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -L${GETTEXT_LIB_DIR}") + + if(NOT GETTEXT_LIB_DIR) + message(WARNING "No GETTEXT_LIB_DIR specified, assuming: /usr/local/opt/gettext/lib") + set(GETTEXT_LIB_DIR "/usr/local/opt/gettext/lib") + endif() + # If we didn't find it where homebrew would put it, and it hasn't been specified, then we have to throw an error + if(NOT IS_DIRECTORY "${GETTEXT_LIB_DIR}") + message(ERROR "We couldn't find your gettext lib directory (${GETTEXT_LIB_DIR}). Please re-run cmake with -DGETTEXT_LIB_DIR=. This is usually where libintl.a and libintl.dylib reside.") + endif() + + # if we actually have a GETTEXT_LIB_DIR we add the linker flag for it + set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -L${GETTEXT_LIB_DIR}") endif() set(_OPENSSL_VERSION "") find_package(OpenSSL 1.0.0 REQUIRED) - find_package(Glibmm REQUIRED) - find_package(LibXML++ REQUIRED) find_package(UUID REQUIRED) find_package(Casablanca REQUIRED) + find_package(LibXML2 REQUIRED) if(BUILD_TESTS) find_package(UnitTest++ REQUIRED) endif() + +elseif(WIN32) + message("-- Setting WIN32 options") + find_package(Casablanca REQUIRED) + add_definitions(-DUNICODE -D_UNICODE -D_WIN32) +else() + message("-- Unsupported Build Platform.") +endif() + +if(WIN32 OR UNIX) option(BUILD_SHARED_LIBS "Build shared Libraries." ON) + option(WASTORE_INSTALL_HEADERS "Install header files." 
ON) file(GLOB WAS_HEADERS includes/was/*.h) install(FILES ${WAS_HEADERS} DESTINATION include/was) @@ -59,12 +73,10 @@ if(UNIX) install(FILES ${WASCORE_HEADERS} DESTINATION include/wascore) file(GLOB WASCORE_DATA includes/wascore/*.dat) install(FILES ${WASCORE_DATA} DESTINATION include/wascore) -else() - message("-- Unsupported Build Platform.") endif() # Compiler (not platform) specific settings -if("${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU") +if(CMAKE_CXX_COMPILER_ID MATCHES "GNU") message("-- Setting gcc options") set(WARNINGS "-Wall -Wextra -Wunused-parameter -Wcast-align -Wcast-qual -Wconversion -Wformat=2 -Winit-self -Winvalid-pch -Wmissing-format-attribute -Wmissing-include-dirs -Wpacked -Wredundant-decls -Wunreachable-code") @@ -81,22 +93,45 @@ if("${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU") add_definitions(-DBOOST_LOG_DYN_LINK) endif() add_definitions(-D_TURN_OFF_PLATFORM_STRING) -elseif((CMAKE_CXX_COMPILER_ID MATCHES "Clang")) - message("-- Setting clang options") - - set(WARNINGS "-Wall -Wextra -Wcast-qual -Wconversion -Wformat=2 -Winit-self -Winvalid-pch -Wmissing-format-attribute -Wmissing-include-dirs -Wpacked -Wredundant-decls") - set(OSX_SUPPRESSIONS "-Wno-overloaded-virtual -Wno-sign-conversion -Wno-deprecated -Wno-unknown-pragmas -Wno-reorder -Wno-char-subscripts -Wno-switch -Wno-unused-parameter -Wno-unused-variable -Wno-deprecated -Wno-unused-value -Wno-unknown-warning-option -Wno-return-type-c-linkage -Wno-unused-function -Wno-sign-compare -Wno-shorten-64-to-32 -Wno-reorder -Wno-unused-local-typedefs") - set(WARNINGS "${WARNINGS} ${OSX_SUPPRESSIONS}") - - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++ -Wno-return-type-c-linkage -Wno-unneeded-internal-declaration") - set(CMAKE_XCODE_ATTRIBUTE_CLANG_CXX_LIBRARY "libc++") - set(CMAKE_XCODE_ATTRIBUTE_CLANG_CXX_LANGUAGE_STANDARD "c++11") - - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -fno-strict-aliasing") - if (BUILD_SHARED_LIBS) - add_definitions(-DBOOST_LOG_DYN_LINK) - endif() - 
add_definitions(-D_TURN_OFF_PLATFORM_STRING) +elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang") + message("-- Setting clang options") + + set(WARNINGS "-Wall -Wextra -Wcast-qual -Wconversion -Wformat=2 -Winit-self -Winvalid-pch -Wmissing-format-attribute -Wmissing-include-dirs -Wpacked -Wredundant-decls") + set(OSX_SUPPRESSIONS "-Wno-overloaded-virtual -Wno-sign-conversion -Wno-deprecated -Wno-unknown-pragmas -Wno-reorder -Wno-char-subscripts -Wno-switch -Wno-unused-parameter -Wno-unused-variable -Wno-deprecated -Wno-unused-value -Wno-unknown-warning-option -Wno-return-type-c-linkage -Wno-unused-function -Wno-sign-compare -Wno-shorten-64-to-32 -Wno-reorder -Wno-unused-local-typedefs") + set(WARNINGS "${WARNINGS} ${OSX_SUPPRESSIONS}") + + set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++ -Wno-return-type-c-linkage -Wno-unneeded-internal-declaration") + set(CMAKE_XCODE_ATTRIBUTE_CLANG_CXX_LIBRARY "libc++") + set(CMAKE_XCODE_ATTRIBUTE_CLANG_CXX_LANGUAGE_STANDARD "c++11") + + set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -fno-strict-aliasing") + if (BUILD_SHARED_LIBS) + add_definitions(-DBOOST_LOG_DYN_LINK) + endif() + add_definitions(-D_TURN_OFF_PLATFORM_STRING) +elseif(CMAKE_CXX_COMPILER_ID MATCHES "MSVC") + message("-- Setting MSVC options") + add_compile_options(/bigobj) + add_compile_options(/MP) + if(BUILD_SHARED_LIBS) + add_definitions(-DWASTORAGE_DLL -D_USRDLL) + else() + add_definitions(-D_NO_WASTORAGE_API) + endif() + + if(ENABLE_MT) + set(CompilerFlags + CMAKE_CXX_FLAGS + CMAKE_CXX_FLAGS_DEBUG + CMAKE_CXX_FLAGS_RELEASE + CMAKE_C_FLAGS + CMAKE_C_FLAGS_DEBUG + CMAKE_C_FLAGS_RELEASE + ) + foreach(CompilerFlag ${CompilerFlags}) + string(REPLACE "/MD" "/MT" ${CompilerFlag} "${${CompilerFlag}}") + endforeach() + endif() else() message("-- Unknown compiler, success is doubtful.") endif() @@ -107,17 +142,26 @@ set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/Binaries) set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/Binaries) 
set(AZURESTORAGE_INCLUDE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/includes) -set(AZURESTORAGE_INCLUDE_DIRS ${CMAKE_CURRENT_SOURCE_DIR}/includes ${CASABLANCA_INCLUDE_DIRS} ${Boost_INCLUDE_DIRS} ${OPENSSL_INCLUDE_DIRS} ${LibXML++_INCLUDE_DIRS} ${UUID_INCLUDE_DIRS} ${Glibmm_INCLUDE_DIRS}) +set(AZURESTORAGE_INCLUDE_DIRS ${CMAKE_CURRENT_SOURCE_DIR}/includes ${CASABLANCA_INCLUDE_DIR} ${Boost_INCLUDE_DIRS} ${OPENSSL_INCLUDE_DIRS} ${UUID_INCLUDE_DIRS} ${LibXML2_INCLUDE_DIR}) set(AZURESTORAGE_LIBRARY azurestorage) -set(AZURESTORAGE_LIBRARIES ${AZURESTORAGE_LIBRARY} ${CASABLANCA_LIBRARIES} ${Boost_LIBRARIES} ${Boost_FRAMEWORK} ${OPENSSL_LIBRARIES} ${LibXML++_LIBRARIES} ${UUID_LIBRARIES} ${Glibmm_LIBRARIES}) +set(AZURESTORAGE_LIBRARIES ${AZURESTORAGE_LIBRARY} ${CASABLANCA_LIBRARY} ${Boost_LIBRARIES} ${Boost_FRAMEWORK} ${OPENSSL_LIBRARIES} ${UUID_LIBRARIES} ${LibXML2_LIBRARY} ${CMAKE_THREAD_LIBS_INIT}) # Set version numbers centralized -set (AZURESTORAGE_VERSION_MAJOR 2) -set (AZURESTORAGE_VERSION_MINOR 6) +set (AZURESTORAGE_VERSION_MAJOR 7) +set (AZURESTORAGE_VERSION_MINOR 5) set (AZURESTORAGE_VERSION_REVISION 0) +# Set output directories. 
+if(NOT DEFINED CMAKE_INSTALL_BINDIR) + set(CMAKE_INSTALL_BINDIR bin) +endif() + +if(NOT DEFINED CMAKE_INSTALL_LIBDIR) + set(CMAKE_INSTALL_LIBDIR lib) +endif() + # Add sources per configuration add_subdirectory(src) @@ -127,5 +171,7 @@ if(BUILD_TESTS) endif() if(BUILD_SAMPLES) + set(AZUREAZURAGE_LIBRARY_SAMPLE azurestoragesample) add_subdirectory(samples) endif() + diff --git a/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v140.vcxproj b/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v140.vcxproj index a7063946..15e58bef 100644 --- a/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v140.vcxproj +++ b/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v140.vcxproj @@ -68,13 +68,13 @@ true - wastorage + wastoraged $(ProjectDir)$(PlatformToolset)\$(Platform)\$(Configuration)\ $(PlatformToolset)\$(Platform)\$(Configuration)\ true - wastorage + wastoraged $(ProjectDir)$(PlatformToolset)\$(Platform)\$(Configuration)\ $(PlatformToolset)\$(Platform)\$(Configuration)\ @@ -98,7 +98,7 @@ true /we4100 /Zm186 %(AdditionalOptions) /bigobj true - includes + includes;%(AdditionalIncludeDirectories) true false @@ -116,6 +116,9 @@ + + _UNICODE;UNICODE;_DEBUG;%(PreprocessorDefinitions) + @@ -125,6 +128,9 @@ + + _UNICODE;UNICODE;_DEBUG;%(PreprocessorDefinitions) + @@ -134,10 +140,13 @@ true WASTORAGE_DLL;WIN32;NDEBUG;_TURN_OFF_PLATFORM_STRING;_WINDOWS;_USRDLL;%(PreprocessorDefinitions) false + /Zi /GF /Gy %(AdditionalOptions) + Guard true true + /debug /debugtype:cv,fixup /incremental:no /opt:ref /opt:icf %(AdditionalOptions) @@ -148,19 +157,25 @@ true WASTORAGE_DLL;WIN32;NDEBUG;_TURN_OFF_PLATFORM_STRING;_WINDOWS;_USRDLL;%(PreprocessorDefinitions) false + /Zi /GF /Gy %(AdditionalOptions) + Guard true true + /debug /debugtype:cv,fixup /incremental:no /opt:ref /opt:icf %(AdditionalOptions) + + + @@ -188,6 +203,9 @@ + + + @@ -247,22 +265,15 @@ Create + - - - - - This project references NuGet package(s) that are missing on this 
computer. Use NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v140.vcxproj.filters b/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v140.vcxproj.filters index 8f933db8..f9b66d1e 100644 --- a/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v140.vcxproj.filters +++ b/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v140.vcxproj.filters @@ -108,6 +108,15 @@ Header Files + + Header Files + + + Header Files + + + Header Files + @@ -272,6 +281,18 @@ Source Files + + Source Files + + + Source Files + + + Source Files + + + Source Files + @@ -279,7 +300,6 @@ - Header Files diff --git a/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v120.vcxproj b/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v141.vcxproj similarity index 87% rename from Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v120.vcxproj rename to Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v141.vcxproj index 378ee1f4..126c8e56 100644 --- a/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v120.vcxproj +++ b/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v141.vcxproj @@ -1,5 +1,5 @@  - + Debug @@ -19,7 +19,7 @@ - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE} + {A8E200A6-910E-44F4-9E8E-C37E45B7AD42} Win32Proj MicrosoftWindowsAzureStorage @@ -27,26 +27,26 @@ DynamicLibrary true - v120 + v141 Unicode DynamicLibrary true - v120 + v141 Unicode DynamicLibrary false - v120 + v141 true Unicode DynamicLibrary false - v120 + v141 true Unicode @@ -68,13 +68,13 @@ true - wastorage + wastoraged $(ProjectDir)$(PlatformToolset)\$(Platform)\$(Configuration)\ $(PlatformToolset)\$(Platform)\$(Configuration)\ true - wastorage + wastoraged $(ProjectDir)$(PlatformToolset)\$(Platform)\$(Configuration)\ 
$(PlatformToolset)\$(Platform)\$(Configuration)\ @@ -96,9 +96,9 @@ Use true - /we4100 /Zm150 %(AdditionalOptions) /bigobj + /we4100 /Zm186 %(AdditionalOptions) /bigobj true - includes + includes;%(AdditionalIncludeDirectories) true false @@ -116,6 +116,9 @@ + + _UNICODE;UNICODE;_DEBUG;%(PreprocessorDefinitions) + @@ -125,6 +128,9 @@ + + _UNICODE;UNICODE;_DEBUG;%(PreprocessorDefinitions) + @@ -134,10 +140,13 @@ true WASTORAGE_DLL;WIN32;NDEBUG;_TURN_OFF_PLATFORM_STRING;_WINDOWS;_USRDLL;%(PreprocessorDefinitions) false + /Zi /GF /Gy %(AdditionalOptions) + Guard true true + /debug /debugtype:cv,fixup /incremental:no /opt:ref /opt:icf %(AdditionalOptions) @@ -148,19 +157,25 @@ true WASTORAGE_DLL;WIN32;NDEBUG;_TURN_OFF_PLATFORM_STRING;_WINDOWS;_USRDLL;%(PreprocessorDefinitions) false + /Zi /GF /Gy %(AdditionalOptions) + Guard true true + /debug /debugtype:cv,fixup /incremental:no /opt:ref /opt:icf %(AdditionalOptions) + + + @@ -188,12 +203,14 @@ - - + + + + @@ -203,6 +220,7 @@ + @@ -228,8 +246,8 @@ - + @@ -247,22 +265,15 @@ Create + - - - - - This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. 
- - - - + \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v120.vcxproj.filters b/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v141.vcxproj.filters similarity index 93% rename from Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v120.vcxproj.filters rename to Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v141.vcxproj.filters index 64f14fc5..f9b66d1e 100644 --- a/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v120.vcxproj.filters +++ b/Microsoft.WindowsAzure.Storage/Microsoft.WindowsAzure.Storage.v141.vcxproj.filters @@ -78,6 +78,9 @@ Header Files + + Header Files + Header Files @@ -99,15 +102,21 @@ Header Files - - Header Files - Header Files Header Files + + Header Files + + + Header Files + + + Header Files + @@ -122,6 +131,9 @@ Source Files + + Source Files + Source Files @@ -239,37 +251,46 @@ Source Files - + Source Files - + Source Files - + Source Files - + Source Files - + Source Files - + Source Files Source Files - + Source Files - + Source Files - + Source Files - + + Source Files + + + Source Files + + + Source Files + + Source Files @@ -279,7 +300,6 @@ - Header Files diff --git a/Microsoft.WindowsAzure.Storage/cmake/Modules/FindCasablanca.cmake b/Microsoft.WindowsAzure.Storage/cmake/Modules/FindCasablanca.cmake index 5c1df3c4..1e1a65a5 100644 --- a/Microsoft.WindowsAzure.Storage/cmake/Modules/FindCasablanca.cmake +++ b/Microsoft.WindowsAzure.Storage/cmake/Modules/FindCasablanca.cmake @@ -7,26 +7,39 @@ find_package(PkgConfig) include(LibFindMacros) -# Include dir -find_path(CASABLANCA_INCLUDE_DIR - NAMES - cpprest/http_client.h - PATHS - ${CASABLANCA_PKGCONF_INCLUDE_DIRS} - ${CASABLANCA_DIR} - $ENV{CASABLANCA_DIR} - /usr/local/include - /usr/include - ../../casablanca - PATH_SUFFIXES - Release/include - include -) +if(WIN32) + find_package(cpprestsdk) + + if(cpprestsdk_FOUND) + set(CASABLANCA_LIBRARY cpprestsdk::cpprest) + 
set(CASABLANCA_PROCESS_LIBS CASABLANCA_LIBRARY) + set(CASABLANCA_PROCESS_INCLUDES CASABLANCA_INCLUDE_DIR) + libfind_process(CASABLANCA) + return() + endif() +else() + # Include dir + find_path(CASABLANCA_INCLUDE_DIR + NAMES + cpprest/http_client.h + PATHS + ${CASABLANCA_PKGCONF_INCLUDE_DIRS} + ${CASABLANCA_DIR} + $ENV{CASABLANCA_DIR} + /usr/local/include + /usr/include + ../../casablanca + PATH_SUFFIXES + Release/include + include + ) +endif() # Library find_library(CASABLANCA_LIBRARY NAMES cpprest + cpprest_2_9.lib PATHS ${CASABLANCA_PKGCONF_LIBRARY_DIRS} ${CASABLANCA_DIR} diff --git a/Microsoft.WindowsAzure.Storage/cmake/Modules/FindGlibmm.cmake b/Microsoft.WindowsAzure.Storage/cmake/Modules/FindGlibmm.cmake deleted file mode 100644 index 79a4bcb4..00000000 --- a/Microsoft.WindowsAzure.Storage/cmake/Modules/FindGlibmm.cmake +++ /dev/null @@ -1,54 +0,0 @@ -# - Try to find Glibmm-2.4 -# Once done, this will define -# -# Glibmm_FOUND - system has Glibmm -# Glibmm_INCLUDE_DIRS - the Glibmm include directories -# Glibmm_LIBRARIES - link these to use Glibmm - -include(LibFindMacros) - -# Dependencies -libfind_package(Glibmm Glib) -libfind_package(Glibmm SigC++) - -# Use pkg-config to get hints about paths -libfind_pkg_check_modules(Glibmm_PKGCONF glibmm-2.4) - -# Main include dir -find_path(Glibmm_INCLUDE_DIR - NAMES glibmm/main.h - PATHS - ${Glibmm_PKGCONF_INCLUDE_DIRS} - /usr/include - PATH_SUFFIXES - glibmm-2.4 -) - -# Glib-related libraries also use a separate config header, which is in lib dir -find_path(GlibmmConfig_INCLUDE_DIR - NAMES glibmmconfig.h - PATHS - ${Glibmm_PKGCONF_INCLUDE_DIRS} - /usr - /usr/x86_64-linux-gnu - PATH_SUFFIXES - lib/glibmm-2.4/include -) - -find_library(Glibmm_LIBRARY - NAMES glibmm-2.4 - PATHS - ${Glibmm_PKGCONF_LIBRARY_DIRS} - /usr - /usr/x86_64-linux-gnu - PATH_SUFFIXES - lib -) - -# libfind_library(Glibmm glibmm 2.4) - -# Set the include dir variables and the libraries and let libfind_process do the rest. 
-# NOTE: Singular variables for this library, plural for libraries this this lib depends on. -set(Glibmm_PROCESS_INCLUDES Glibmm_INCLUDE_DIR GlibmmConfig_INCLUDE_DIR) -set(Glibmm_PROCESS_LIBS Glibmm_LIBRARY) -libfind_process(Glibmm) diff --git a/Microsoft.WindowsAzure.Storage/cmake/Modules/FindLibXML++.cmake b/Microsoft.WindowsAzure.Storage/cmake/Modules/FindLibXML++.cmake deleted file mode 100644 index 9f1af8c2..00000000 --- a/Microsoft.WindowsAzure.Storage/cmake/Modules/FindLibXML++.cmake +++ /dev/null @@ -1,58 +0,0 @@ -# find libxml++ -# -# exports: -# -# LibXML++_FOUND -# LibXML++_INCLUDE_DIRS -# LibXML++_LIBRARIES -# - -include(LibFindMacros) - -# Use pkg-config to get hints about paths -libfind_pkg_check_modules(LibXML++_PKGCONF QUIET libxml++-2.6) - -# Include dir -find_path(LibXML++_INCLUDE_DIR - NAMES libxml++/libxml++.h - PATHS - ${LibXML++_PKGCONF_INCLUDE_DIRS} - ${LibXML++_ROOT_DIR} - /usr - PATH_SUFFIXES - include/libxml++-2.6 -) - -# Config Include dir -find_path(LibXML++Config_INCLUDE_DIR - NAMES libxml++config.h - PATHS - ${LibXML++_PKGCONF_INCLUDE_DIRS} - ${LibXML++_ROOT_DIR} - /usr - PATH_SUFFIXES - lib/libxml++-2.6/include -) - -# Finally the library itself -find_library(LibXML++_LIBRARY - NAMES xml++ xml++-2.6 - PATHS - ${LibXML++_PKGCONF_LIBRARY_DIRS} - ${LibXML++_ROOT_DIR} - /usr - PATH_SUFFIXES - lib -) - -FIND_PACKAGE_HANDLE_STANDARD_ARGS(LibXML++ DEFAULT_MSG LibXML++_LIBRARY LibXML++_INCLUDE_DIR) - -if(LibXML++_INCLUDE_DIR AND LibXML++_LIBRARY) - set(LibXML++_LIBRARIES ${LibXML++_LIBRARY} ${LibXML++_PKGCONF_LIBRARIES}) - set(LibXML++_INCLUDE_DIRS ${LibXML++_INCLUDE_DIR} ${LibXML++Config_INCLUDE_DIR} ${LibXML++_PKGCONF_INCLUDE_DIRS}) - set(LibXML++_FOUND yes) -else() - set(LibXML++_LIBRARIES) - set(LibXML++_INCLUDE_DIRS) - set(LibXML++_FOUND no) -endif() diff --git a/Microsoft.WindowsAzure.Storage/cmake/Modules/FindLibXML2.cmake b/Microsoft.WindowsAzure.Storage/cmake/Modules/FindLibXML2.cmake index d1413be6..085c9428 100644 --- 
a/Microsoft.WindowsAzure.Storage/cmake/Modules/FindLibXML2.cmake +++ b/Microsoft.WindowsAzure.Storage/cmake/Modules/FindLibXML2.cmake @@ -14,14 +14,19 @@ # Variables defined by this module: # # LIBXML2_FOUND System has LibXML2 libs/headers -# LibXML2_LIBRARIES The LibXML2 libraries +# LibXML2_LIBRARY The LibXML2 libraries # LibXML2_INCLUDE_DIR The location of LibXML2 headers +include(LibFindMacros) + +# Use pkg-config to get hints about paths +libfind_pkg_check_modules(LibXML2_PKGCONF libxml2) + find_path(LibXML2_ROOT_DIR NAMES include/libxml2/libxml/tree.h ) -find_library(LibXML2_LIBRARIES +find_library(LibXML2_LIBRARY NAMES xml2 HINTS ${LibXML2_ROOT_DIR}/lib ) @@ -31,14 +36,7 @@ find_path(LibXML2_INCLUDE_DIR HINTS ${LibXML2_ROOT_DIR}/include/libxml2 ) -include(FindPackageHandleStandardArgs) -find_package_handle_standard_args(LibXML2 DEFAULT_MSG - LibXML2_LIBRARIES - LibXML2_INCLUDE_DIR -) +set(LibXML2_PROCESS_LIBS LibXML2_LIBRARY) +set(LibXML2_PROCESS_INCLUDES LibXML2_INCLUDE_DIR) -mark_as_advanced( - LibXML2_ROOT_DIR - LibXML2_LIBRARIES - LibXML2_INCLUDE_DIR -) +libfind_process(LibXML2) diff --git a/Microsoft.WindowsAzure.Storage/cmake/Modules/FindUUID.cmake b/Microsoft.WindowsAzure.Storage/cmake/Modules/FindUUID.cmake index 9171f8c8..a427288a 100644 --- a/Microsoft.WindowsAzure.Storage/cmake/Modules/FindUUID.cmake +++ b/Microsoft.WindowsAzure.Storage/cmake/Modules/FindUUID.cmake @@ -63,6 +63,12 @@ else (UUID_LIBRARIES AND UUID_INCLUDE_DIRS) /usr/freeware/lib64 ) + if (APPLE) + if (NOT UUID_LIBRARY) + set(UUID_LIBRARY "") + endif (NOT UUID_LIBRARY) + endif (APPLE) + find_library(UUID_LIBRARY_DEBUG NAMES uuidd @@ -88,9 +94,9 @@ else (UUID_LIBRARIES AND UUID_INCLUDE_DIRS) set(UUID_INCLUDE_DIRS ${UUID_INCLUDE_DIR}) set(UUID_LIBRARIES ${UUID_LIBRARY}) - if (UUID_INCLUDE_DIRS AND UUID_LIBRARIES) + if (UUID_INCLUDE_DIRS AND (APPLE OR UUID_LIBRARIES)) set(UUID_FOUND TRUE) - endif (UUID_INCLUDE_DIRS AND UUID_LIBRARIES) + endif (UUID_INCLUDE_DIRS AND (APPLE OR 
UUID_LIBRARIES)) if (UUID_FOUND) if (NOT UUID_FIND_QUIETLY) diff --git a/Microsoft.WindowsAzure.Storage/cmake/Modules/FindUnitTest++.cmake b/Microsoft.WindowsAzure.Storage/cmake/Modules/FindUnitTest++.cmake index be2d3a45..d41d34f5 100644 --- a/Microsoft.WindowsAzure.Storage/cmake/Modules/FindUnitTest++.cmake +++ b/Microsoft.WindowsAzure.Storage/cmake/Modules/FindUnitTest++.cmake @@ -20,6 +20,7 @@ find_path(UnitTest++_INCLUDE_DIR ${UnitTest++_PKGCONF_INCLUDE_DIRS} ${CMAKE_SOURCE_DIR}/tests/UnitTest++/src /usr/local/include + /usr/include PATH_SUFFIXES unittest++ UnitTest++ diff --git a/Microsoft.WindowsAzure.Storage/includes/was/auth.h b/Microsoft.WindowsAzure.Storage/includes/was/auth.h index 11e2b94e..ccfce406 100644 --- a/Microsoft.WindowsAzure.Storage/includes/was/auth.h +++ b/Microsoft.WindowsAzure.Storage/includes/was/auth.h @@ -26,12 +26,13 @@ namespace azure { namespace storage { class cloud_blob_shared_access_headers; class cloud_file_shared_access_headers; class account_shared_access_policy; + struct user_delegation_key; }} // namespace azure::storage namespace azure { namespace storage { namespace protocol { - utility::string_t calculate_hmac_sha256_hash(const utility::string_t& string_to_hash, const storage_credentials& credentials); + utility::string_t calculate_hmac_sha256_hash(const utility::string_t& string_to_hash, const std::vector& key); const utility::string_t auth_name_shared_key(_XPLATSTR("SharedKey")); const utility::string_t auth_name_shared_key_lite(_XPLATSTR("SharedKeyLite")); @@ -437,15 +438,44 @@ namespace azure { namespace storage { namespace protocol { storage_credentials m_credentials; }; + /// + /// A helper class for signing a request with bearer token. + /// + class bearer_token_authentication_handler : public authentication_handler + { + public: + + /// + /// Initializes a new instance of the class. + /// + /// The to use to sign the request. 
+ explicit bearer_token_authentication_handler(storage_credentials credentials) + : m_credentials(std::move(credentials)) + { + } + + /// + /// Sign the specified request for authentication via bearer token. + /// + /// The request to be signed. + /// An object that represents the context for the current operation. + WASTORAGE_API void sign_request(web::http::http_request& request, operation_context context) const override; + + private: + + storage_credentials m_credentials; + }; + #pragma endregion #pragma region Shared Access Signatures utility::string_t get_account_sas_token(const utility::string_t& identifier, const account_shared_access_policy& policy, const storage_credentials& credentials); - utility::string_t get_blob_sas_token(const utility::string_t& identifier, const shared_access_policy& policy, const cloud_blob_shared_access_headers& headers, const utility::string_t& resource_type, const utility::string_t& resource, const storage_credentials& credentials); + utility::string_t get_blob_sas_token(const utility::string_t& identifier, const shared_access_policy& policy, const cloud_blob_shared_access_headers& headers, const utility::string_t& resource_type, const utility::string_t& resource, const utility::string_t& snapshot_time, const storage_credentials& credentials); utility::string_t get_queue_sas_token(const utility::string_t& identifier, const shared_access_policy& policy, const utility::string_t& resource, const storage_credentials& credentials); utility::string_t get_table_sas_token(const utility::string_t& identifier, const shared_access_policy& policy, const utility::string_t& table_name, const utility::string_t& start_partition_key, const utility::string_t& start_row_key, const utility::string_t& end_partition_key, const utility::string_t& end_row_key, const utility::string_t& resource, const storage_credentials& credentials); utility::string_t get_file_sas_token(const utility::string_t& identifier, const shared_access_policy& policy, const 
cloud_file_shared_access_headers& headers, const utility::string_t& resource_type, const utility::string_t& resource, const storage_credentials& credentials); + utility::string_t get_blob_user_delegation_sas_token(const shared_access_policy& policy, const cloud_blob_shared_access_headers& headers, const utility::string_t& resource_type, const utility::string_t& resource, const utility::string_t& snapshot_time, const user_delegation_key& key); storage_credentials parse_query(const web::http::uri& uri, bool require_signed_resource); #pragma endregion diff --git a/Microsoft.WindowsAzure.Storage/includes/was/blob.h b/Microsoft.WindowsAzure.Storage/includes/was/blob.h index 8a21b390..6c1b3366 100644 --- a/Microsoft.WindowsAzure.Storage/includes/was/blob.h +++ b/Microsoft.WindowsAzure.Storage/includes/was/blob.h @@ -19,6 +19,10 @@ #include "limits" #include "service_client.h" +#include "wascore/timer_handler.h" + +#pragma push_macro("max") +#undef max namespace azure { namespace storage { @@ -44,6 +48,10 @@ namespace azure { namespace storage { namespace core { class cloud_append_blob_ostreambuf; + class basic_cloud_block_blob_ostreambuf; + class basic_cloud_page_blob_ostreambuf; + class basic_cloud_append_blob_ostreambuf; + class basic_cloud_blob_istreambuf; } /// @@ -93,6 +101,74 @@ namespace azure { namespace storage { blob }; + /// + /// Represents account properties for blob service. + /// + class account_properties + { + public: + + /// + /// Initializes a new instance of the class. + /// + account_properties() + { + } + +#if defined(_MSC_VER) && _MSC_VER < 1900 + // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // have implicitly-declared move constructor and move assignment operator. + + /// + /// Initializes a new instance of the class based on an existing instance. + /// + /// An existing object. 
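The new `bearer_token_authentication_handler` above signs requests with an OAuth 2.0 bearer token instead of a shared key. A minimal sketch of what the signing amounts to — the `Authorization: Bearer <token>` format is the standard OAuth 2.0 scheme, and `make_bearer_header` is a hypothetical helper, not part of the library:

```cpp
#include <string>

// Hypothetical helper mirroring the essence of bearer-token signing:
// prefix the raw token with the standard OAuth 2.0 scheme so it can be
// placed in the HTTP Authorization header of an outgoing request.
std::string make_bearer_header(const std::string& token)
{
    return "Bearer " + token;
}
```

In the real handler, `sign_request` would set this value as the request's Authorization header using the token held in `m_credentials`.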
+ account_properties(account_properties&& other) + { + *this = std::move(other); + } + + /// + /// Returns a reference to an object. + /// + /// An existing object to use to set properties. + /// An object with properties set. + account_properties& operator=(account_properties&& other) + { + if (this != &other) + { + m_sku_name = std::move(other.m_sku_name); + m_account_kind = std::move(other.m_account_kind); + } + return *this; + } +#endif + + /// + /// Gets the account SKU type based on GeoReplication state. + /// + /// "Standard_LRS", "Standard_ZRS", "Standard_GRS", "Standard_RAGRS", "Premium_LRS", or "Premium_ZRS" + const utility::string_t& sku_name() const + { + return m_sku_name; + } + + /// + /// Gets the account kind. + /// + /// "Storage", "StorageV2", or "BlobStorage" + const utility::string_t& account_kind() const + { + return m_account_kind; + } + + private: + + utility::string_t m_sku_name; + utility::string_t m_account_kind; + friend class protocol::blob_response_parsers; + }; + /// /// Represents a shared access policy, which specifies the start time, expiry time, /// and permissions for a shared access signature for a blob or container. @@ -204,6 +280,11 @@ namespace azure { namespace storage { /// copy = 1 << 3, + /// + /// Include saved versions of blobs. + /// + versions = 1 << 4, + /// /// List all available committed blobs, uncommitted blobs, and snapshots, and return all metadata and copy status for those blobs. /// @@ -755,233 +836,103 @@ namespace azure { namespace storage { }; /// - /// The lease state of a resource. + /// The tier of the block blob on a standard storage account. /// - enum class lease_state + enum class standard_blob_tier { /// - /// The lease state is not specified. - /// - unspecified, - - /// - /// The lease is in the Available state. - /// - available, - - /// - /// The lease is in the Leased state. - /// - leased, - - /// - /// The lease is in the Expired state. 
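`account_properties` (like several other classes in this diff) declares explicit move operations only under `_MSC_VER < 1900`, because Visual Studio 2013 and earlier do not implicitly generate them, while fully C++11-conformant compilers do. A self-contained sketch of the same pattern with a toy class (names are illustrative):

```cpp
#include <string>
#include <utility>

// Toy properties class using the library's conditional-move pattern:
// on compilers with full C++11 support the implicitly declared move
// operations suffice; pre-2015 MSVC needs them spelled out.
class toy_properties
{
public:
    toy_properties() {}

#if defined(_MSC_VER) && _MSC_VER < 1900
    toy_properties(toy_properties&& other) { *this = std::move(other); }

    toy_properties& operator=(toy_properties&& other)
    {
        if (this != &other)
        {
            m_sku_name = std::move(other.m_sku_name);
        }
        return *this;
    }
#endif

    const std::string& sku_name() const { return m_sku_name; }
    void set_sku_name(std::string v) { m_sku_name = std::move(v); }

private:
    std::string m_sku_name;
};
```

Either way, moving an instance transfers its string members without copying.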
- /// - expired, - - /// - /// The lease is in the Breaking state. - /// - breaking, - - /// - /// The lease is in the Broken state. + /// The tier is not recognized by this version of the library /// - broken, - }; + unknown, - /// - /// The lease status of a resource. - /// - enum class lease_status - { /// - /// The lease status is not specified. + /// Hot Storage /// - unspecified, + hot, /// - /// The resource is locked. + /// Cool Storage /// - locked, + cool, /// - /// The resource is available to be locked. + /// Archive Storage /// - unlocked + archive }; /// - /// Specifies the proposed duration of seconds that the lease should continue before it is broken. + /// The tier of the page blob. + /// Please take a look at https://docs.microsoft.com/en-us/azure/storage/storage-premium-storage#scalability-and-performance-targets + /// for detailed information on the corresponding IOPS and throughput per PremiumPageBlobTier. /// - class lease_break_period + enum class premium_blob_tier { - public: /// - /// Initializes a new instance of the class that breaks - /// a fixed-duration lease after the remaining lease period elapses, or breaks an infinite lease immediately. + /// The tier is not recognized by this version of the library /// - lease_break_period() - : m_seconds(std::chrono::seconds::max()) - { - } + unknown, /// - /// Initializes a new instance of the class that breaks - /// a lease after the proposed duration. + /// P4 Tier /// - /// The proposed duration, in seconds, for the lease before it is broken. Value may - /// be between 0 and 60 seconds. - lease_break_period(const std::chrono::seconds& seconds) - : m_seconds(seconds) - { - if (seconds != std::chrono::seconds::max()) - { - utility::assert_in_bounds(_XPLATSTR("seconds"), seconds, protocol::minimum_lease_break_period, protocol::maximum_lease_break_period); - } - } - -#if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. 
g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, - // have implicitly-declared move constructor and move assignment operator. + p4, /// - /// Initializes a new instance of the class based on an existing instance. + /// P6 Tier /// - /// An existing object. - lease_break_period(lease_break_period&& other) - { - *this = std::move(other); - } + p6, /// - /// Returns a reference to an object. + /// P10 Tier /// - /// An existing object to use to set properties. - /// An object with properties set. - lease_break_period& operator=(lease_break_period&& other) - { - if (this != &other) - { - m_seconds = std::move(other.m_seconds); - } - return *this; - } -#endif + p10, /// - /// Indicates whether the object is valid. + /// P20 Tier /// - /// true if the object is valid; otherwise, false. - bool is_valid() const - { - return m_seconds < std::chrono::seconds::max(); - } + p20, /// - /// Gets the proposed duration for the lease before it is broken. + /// P30 Tier /// - /// The proposed proposed duration for the lease before it is broken, in seconds. - const std::chrono::seconds& seconds() const - { - return m_seconds; - } + p30, - private: - - std::chrono::seconds m_seconds; - }; - - /// - /// The lease duration for a Blob service resource. - /// - enum class lease_duration - { /// - /// The lease duration is not specified. + /// P40 Tier /// - unspecified, + p40, /// - /// The lease duration is finite. + /// P50 Tier /// - fixed, + p50, /// - /// The lease duration is infinite. + /// P60 Tier /// - infinite, + p60 }; /// - /// Specifies the duration of the lease. + /// The status of the blob if being re-hydrated. /// - class lease_time + enum class archive_status { - public: - /// - /// Initializes a new instance of the class that never expires. - /// - lease_time() - : m_seconds(-1) - { - } - - /// - /// Initializes a new instance of the class that expires after the - /// specified duration. - /// - /// The duration of the lease in seconds. 
For a non-infinite lease, this value can be - /// between 15 and 60 seconds. - lease_time(const std::chrono::seconds& seconds) - : m_seconds(seconds) - { - if (seconds.count() != -1) - { - utility::assert_in_bounds(_XPLATSTR("seconds"), seconds, protocol::minimum_fixed_lease_duration, protocol::maximum_fixed_lease_duration); - } - } - -#if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, - // have implicitly-declared move constructor and move assignment operator. - /// - /// Initializes a new instance of the class based on an existing instance. + /// The blob's archive status is unknown /// - /// An existing object. - lease_time(lease_time&& other) - { - *this = std::move(other); - } + unknown, /// - /// Returns a reference to an object. + /// The blob is being re-hydrated to hot /// - /// An existing object to use to set properties. - /// An object with properties set. - lease_time& operator=(lease_time&& other) - { - if (this != &other) - { - m_seconds = std::move(other.m_seconds); - } - return *this; - } -#endif + rehydrate_pending_to_hot, /// - /// Gets the duration of the lease in seconds for a non-infinite lease. + /// The blob is being re-hydrated to cool /// - /// The duration of the lease. 
- const std::chrono::seconds& seconds() const - { - return m_seconds; - } - - private: - - std::chrono::seconds m_seconds; + rehydrate_pending_to_cool }; /// @@ -1072,8 +1023,13 @@ namespace azure { namespace storage { m_lease_state(azure::storage::lease_state::unspecified), m_lease_duration(azure::storage::lease_duration::unspecified), m_page_blob_sequence_number(0), m_append_blob_committed_block_count(0), - m_server_encrypted(false) + m_server_encrypted(false), + m_is_incremental_copy(false) { + m_standard_blob_tier = azure::storage::standard_blob_tier::unknown; + m_premium_blob_tier = azure::storage::premium_blob_tier::unknown; + m_archive_status = azure::storage::archive_status::unknown; + m_access_tier_inferred = false; } #if defined(_MSC_VER) && _MSC_VER < 1900 @@ -1106,6 +1062,7 @@ namespace azure { namespace storage { m_content_md5 = std::move(other.m_content_md5); m_content_type = std::move(other.m_content_type); m_etag = std::move(other.m_etag); + m_encryption_key_sha256 = std::move(other.m_encryption_key_sha256); m_last_modified = std::move(other.m_last_modified); m_type = std::move(other.m_type); m_lease_status = std::move(other.m_lease_status); @@ -1114,6 +1071,13 @@ namespace azure { namespace storage { m_page_blob_sequence_number = std::move(other.m_page_blob_sequence_number); m_append_blob_committed_block_count = std::move(other.m_append_blob_committed_block_count); m_server_encrypted = std::move(other.m_server_encrypted); + m_is_incremental_copy = std::move(other.m_is_incremental_copy); + m_standard_blob_tier = std::move(other.m_standard_blob_tier); + m_premium_blob_tier = std::move(other.m_premium_blob_tier); + m_archive_status = std::move(other.m_archive_status); + m_access_tier_inferred = std::move(other.m_access_tier_inferred); + m_access_tier_change_time = std::move(other.m_access_tier_change_time); + m_version_id = std::move(other.m_version_id); } return *this; } @@ -1245,6 +1209,15 @@ namespace azure { namespace storage { return m_etag; } + 
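The new tier enums correspond to the access-tier values the Blob service uses on the wire ("Hot", "Cool", "Archive" in the `x-ms-access-tier` header). A hedged sketch of that mapping — the function name and the empty-string handling of `unknown` are illustrative, not taken from the library:

```cpp
#include <string>

enum class standard_blob_tier { unknown, hot, cool, archive };

// Map a tier to its x-ms-access-tier header value; an empty string
// stands in for "unknown" (how the real library encodes an
// unrecognized tier is not shown in this diff).
std::string tier_to_header_value(standard_blob_tier tier)
{
    switch (tier)
    {
    case standard_blob_tier::hot:     return "Hot";
    case standard_blob_tier::cool:    return "Cool";
    case standard_blob_tier::archive: return "Archive";
    default:                          return "";
    }
}
```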
/// + /// Gets the SHA-256 of the customer-provided key used to encrypt this blob. + /// + /// The base64-encoded SHA-256 value. + const utility::string_t& encryption_key_sha256() const + { + return m_encryption_key_sha256; + } + /// /// Gets the last-modified time for the blob, expressed as a UTC value. /// @@ -1317,6 +1290,69 @@ namespace azure { namespace storage { return m_server_encrypted; } + /// + /// Gets a value indicating whether or not this blob is an incremental copy. + /// + /// true if the blob is an incremental copy; otherwise, false. + bool is_incremental_copy() const + { + return m_is_incremental_copy; + } + + /// + /// Gets a value indicating the standard blob tier if the blob is a block blob. + /// + /// An enum that indicates the blob's tier. + azure::storage::standard_blob_tier standard_blob_tier() const + { + return m_standard_blob_tier; + } + + /// + /// Gets a value indicating the premium blob tier if the blob is a page blob. + /// + /// An enum that indicates the blob's tier. + azure::storage::premium_blob_tier premium_blob_tier() const + { + return m_premium_blob_tier; + } + + /// + /// Gets a value indicating the archive status of the blob. + /// + /// An object that indicates the blob's archive status. + azure::storage::archive_status archive_status() const + { + return m_archive_status; + } + + /// + /// Gets a value indicating whether or not the access tier is inferred. + /// + /// true if the access tier is not explicitly set on a page blob on premium accounts; otherwise, false. + bool access_tier_inferred() const + { + return m_access_tier_inferred; + } + + /// + /// Gets the access tier change time for the blob, expressed as a UTC value. + /// + /// The access tier change time, in UTC format. + utility::datetime access_tier_change_time() const + { + return m_access_tier_change_time; + } + + /// + /// Gets the version id of the blob. + /// + /// The version id the blob refers to. 
+ const utility::string_t& version_id() const + { + return m_version_id; + } + private: /// @@ -1342,21 +1378,29 @@ namespace azure { namespace storage { utility::string_t m_content_md5; utility::string_t m_content_type; utility::string_t m_etag; + utility::string_t m_encryption_key_sha256; utility::datetime m_last_modified; + utility::datetime m_access_tier_change_time; + utility::string_t m_version_id; blob_type m_type; azure::storage::lease_status m_lease_status; azure::storage::lease_state m_lease_state; azure::storage::lease_duration m_lease_duration; + azure::storage::standard_blob_tier m_standard_blob_tier; + azure::storage::premium_blob_tier m_premium_blob_tier; + azure::storage::archive_status m_archive_status; int64_t m_page_blob_sequence_number; int m_append_blob_committed_block_count; bool m_server_encrypted; + bool m_is_incremental_copy; + bool m_access_tier_inferred; void copy_from_root(const cloud_blob_properties& root_blob_properties); void update_etag_and_last_modified(const cloud_blob_properties& parsed_properties); void update_size(const cloud_blob_properties& parsed_properties); void update_page_blob_sequence_number(const cloud_blob_properties& parsed_properties); void update_append_blob_committed_block_count(const cloud_blob_properties& parsed_properties); - void update_all(const cloud_blob_properties& parsed_properties, bool ignore_md5); + void update_all(const cloud_blob_properties& parsed_properties); void set_server_encrypted(bool server_encrypted) { @@ -1527,12 +1571,14 @@ namespace azure { namespace storage { blob_request_options() : request_options(), m_use_transactional_md5(false), + m_use_transactional_crc64(false), m_store_blob_content_md5(false), m_disable_content_md5_validation(false), + m_disable_content_crc64_validation(false), m_parallelism_factor(1), m_single_blob_upload_threshold(protocol::default_single_blob_upload_threshold), - m_stream_write_size(protocol::max_block_size), - m_stream_read_size(protocol::max_block_size), + 
m_stream_write_size(protocol::default_stream_write_size), + m_stream_read_size(protocol::default_stream_read_size), m_absorb_conditional_errors_on_retry(false) { } @@ -1561,13 +1607,16 @@ namespace azure { namespace storage { { request_options::operator=(std::move(other)); m_use_transactional_md5 = std::move(other.m_use_transactional_md5); + m_use_transactional_crc64 = std::move(other.m_use_transactional_crc64); m_store_blob_content_md5 = std::move(other.m_store_blob_content_md5); m_disable_content_md5_validation = std::move(other.m_disable_content_md5_validation); + m_disable_content_crc64_validation = std::move(other.m_disable_content_crc64_validation); m_parallelism_factor = std::move(other.m_parallelism_factor); m_single_blob_upload_threshold = std::move(other.m_single_blob_upload_threshold); m_stream_write_size = std::move(other.m_stream_write_size); m_stream_read_size = std::move(other.m_stream_read_size); m_absorb_conditional_errors_on_retry = std::move(other.m_absorb_conditional_errors_on_retry); + m_encryption_key = std::move(other.m_encryption_key); } return *this; } @@ -1595,13 +1644,21 @@ namespace azure { namespace storage { m_store_blob_content_md5.merge(other.m_store_blob_content_md5); } - m_use_transactional_md5.merge(other.m_use_transactional_md5); + // MD5 overrides CRC64 in the same scope. While explicit CRC64 overrides default MD5. 
+ if (!m_use_transactional_crc64.has_value() || !m_use_transactional_crc64) + { + m_use_transactional_md5.merge(other.m_use_transactional_md5); + } + m_use_transactional_crc64.merge(other.m_use_transactional_crc64); m_disable_content_md5_validation.merge(other.m_disable_content_md5_validation); + m_disable_content_crc64_validation.merge(other.m_disable_content_crc64_validation); m_parallelism_factor.merge(other.m_parallelism_factor); m_single_blob_upload_threshold.merge(other.m_single_blob_upload_threshold); m_stream_write_size.merge(other.m_stream_write_size); m_stream_read_size.merge(other.m_stream_read_size); m_absorb_conditional_errors_on_retry.merge(other.m_absorb_conditional_errors_on_retry); + if (m_encryption_key.empty() && !other.m_encryption_key.empty()) + m_encryption_key = other.m_encryption_key; } /// @@ -1622,6 +1679,24 @@ namespace azure { namespace storage { m_use_transactional_md5 = value; } + /// + /// Gets a value indicating whether the content-CRC64 hash will be calculated and validated for the request. + /// + /// true if the content-CRC64 hash will be calculated and validated for the request; otherwise, false. + bool use_transactional_crc64() const + { + return m_use_transactional_crc64; + } + + /// + /// Indicates whether to calculate and validate the content-CRC64 hash for the request. + /// + /// true to calculate and validate the content-CRC64 hash for the request; otherwise, false. + void set_use_transactional_crc64(bool value) + { + m_use_transactional_crc64 = value; + } + /// /// Gets a value indicating whether the content-MD5 hash will be calculated and stored when uploading a blob. /// @@ -1658,11 +1733,29 @@ namespace azure { namespace storage { m_disable_content_md5_validation = value; } + /// + /// Gets a value indicating whether content-CRC64 validation will be disabled when downloading blobs. + /// + /// true to disable content-CRC64 validation; otherwise, false. 
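The merge rule added above gives transactional MD5 precedence over CRC64 within the same scope, while an explicitly set CRC64 option suppresses inheriting a defaulted MD5. A self-contained sketch of that precedence with a minimal stand-in for `option_with_default` (the real class template has more machinery):

```cpp
// Minimal stand-in for option_with_default<bool>: tracks whether a
// value was explicitly set, and merge() only fills in unset options.
struct opt_bool
{
    bool value = false;
    bool set = false;

    bool has_value() const { return set; }
    operator bool() const { return value; }
    void assign(bool v) { value = v; set = true; }

    void merge(const opt_bool& other)
    {
        if (!set && other.set) { value = other.value; set = true; }
    }
};

struct checksum_options
{
    opt_bool use_transactional_md5;
    opt_bool use_transactional_crc64;

    // Mirrors the rule in the diff: MD5 overrides CRC64 in the same
    // scope, while explicit CRC64 overrides a merely defaulted MD5.
    void apply_defaults(const checksum_options& other)
    {
        if (!use_transactional_crc64.has_value() || !use_transactional_crc64)
        {
            use_transactional_md5.merge(other.use_transactional_md5);
        }
        use_transactional_crc64.merge(other.use_transactional_crc64);
    }
};
```

So a request that explicitly opts into CRC64 will not pick up a service-level MD5 default, but a request that sets neither inherits MD5 as before.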
+ bool disable_content_crc64_validation() const + { + return m_disable_content_crc64_validation; + } + + /// + /// Indicates whether to disable content-CRC64 validation when downloading blobs. + /// + /// true to disable content-CRC64 validation; otherwise, false. + void set_disable_content_crc64_validation(bool value) + { + m_disable_content_crc64_validation = value; + } + /// /// Gets the maximum size of a blob in bytes that may be uploaded as a single blob. /// /// The maximum size of a blob, in bytes, that may be uploaded as a single blob, - /// ranging from between 1 and 64 MB inclusive. + /// ranging from between 1 and 5000 MB inclusive. utility::size64_t single_blob_upload_threshold_in_bytes() const { return m_single_blob_upload_threshold; @@ -1672,10 +1765,10 @@ namespace azure { namespace storage { /// Sets the maximum size of a blob in bytes that may be uploaded as a single blob. /// /// The maximum size of a blob, in bytes, that may be uploaded as a single blob, - /// ranging from between 1 and 64 MB inclusive. + /// ranging from between 1 and 5000 MB inclusive. void set_single_blob_upload_threshold_in_bytes(utility::size64_t value) { - utility::assert_in_bounds(_XPLATSTR("value"), value, 1 * 1024 * 1024, 64 * 1024 * 1024); + utility::assert_in_bounds(_XPLATSTR("value"), value, 1 * 1024 * 1024, protocol::max_single_blob_upload_threshold); m_single_blob_upload_threshold = value; } @@ -1704,7 +1797,7 @@ namespace azure { namespace storage { /// Gets the minimum number of bytes to buffer when reading from a blob stream. /// /// The minimum number of bytes to buffer, being at least 16KB. - size_t stream_read_size_in_bytes() const + option_with_default stream_read_size_in_bytes() const { return m_stream_read_size; } @@ -1722,8 +1815,8 @@ namespace azure { namespace storage { /// /// Gets the minimum number of bytes to buffer when writing to a blob stream. /// - /// The minimum number of bytes to buffer, ranging from between 16 KB and 4 MB inclusive. 
- size_t stream_write_size_in_bytes() const + /// The minimum number of bytes to buffer, ranging from between 16 KB and 4000 MB inclusive. + option_with_default<size_t> stream_write_size_in_bytes() const { return m_stream_write_size; } @@ -1731,10 +1824,10 @@ namespace azure { namespace storage { /// /// Sets the minimum number of bytes to buffer when writing to a blob stream. /// - /// The minimum number of bytes to buffer, ranging from between 16 KB and 4 MB inclusive. + /// The minimum number of bytes to buffer, ranging from between 16 KB and 4000 MB inclusive. void set_stream_write_size_in_bytes(size_t value) { - utility::assert_in_bounds(_XPLATSTR("value"), value, 16 * 1024, 4 * 1024 * 1024); + utility::assert_in_bounds(_XPLATSTR("value"), value, 16 * 1024, protocol::max_block_size); m_stream_write_size = value; } @@ -1762,16 +1855,37 @@ namespace azure { namespace storage { m_absorb_conditional_errors_on_retry = value; } + /// + /// Gets the customer provided encryption key. + /// + /// The customer provided encryption key. + const std::vector<uint8_t>& encryption_key() const + { + return m_encryption_key; + } + + /// + /// Sets the customer provided encryption key. + /// + /// The customer provided encryption key.
+ void set_encryption_key(std::vector<uint8_t> encryption_key) + { + m_encryption_key = std::move(encryption_key); + } + private: option_with_default<bool> m_use_transactional_md5; + option_with_default<bool> m_use_transactional_crc64; option_with_default<bool> m_store_blob_content_md5; option_with_default<bool> m_disable_content_md5_validation; + option_with_default<bool> m_disable_content_crc64_validation; option_with_default<int> m_parallelism_factor; option_with_default<utility::size64_t> m_single_blob_upload_threshold; option_with_default<size_t> m_stream_write_size; option_with_default<size_t> m_stream_read_size; option_with_default<bool> m_absorb_conditional_errors_on_retry; + std::vector<uint8_t> m_encryption_key; }; /// @@ -2103,7 +2217,8 @@ namespace azure { namespace storage { /// cloud_blob_container_properties() : m_lease_status(azure::storage::lease_status::unspecified), m_lease_state(azure::storage::lease_state::unspecified), - m_lease_duration(azure::storage::lease_duration::unspecified) + m_lease_duration(azure::storage::lease_duration::unspecified), + m_public_access(azure::storage::blob_container_public_access_type::off) { } @@ -2134,6 +2249,7 @@ namespace azure { namespace storage { m_lease_status = std::move(other.m_lease_status); m_lease_state = std::move(other.m_lease_state); m_lease_duration = std::move(other.m_lease_duration); + m_public_access = std::move(other.m_public_access); } return *this; } @@ -2184,6 +2300,15 @@ namespace azure { namespace storage { return m_lease_duration; } + /// + /// Gets the public access setting for the container. + /// + /// The public access setting for the container.
+ azure::storage::blob_container_public_access_type public_access() const + { + return m_public_access; + } + private: /// @@ -2201,6 +2326,7 @@ namespace azure { namespace storage { azure::storage::lease_status m_lease_status; azure::storage::lease_state m_lease_state; azure::storage::lease_duration m_lease_duration; + azure::storage::blob_container_public_access_type m_public_access; void update_etag_and_last_modified(const cloud_blob_container_properties& parsed_properties); @@ -2209,6 +2335,17 @@ namespace azure { namespace storage { friend class protocol::list_containers_reader; }; + struct user_delegation_key + { + utility::string_t signed_oid; + utility::string_t signed_tid; + utility::datetime signed_start; + utility::datetime signed_expiry; + utility::string_t signed_service; + utility::string_t signed_version; + utility::string_t key; + }; + /// /// Provides a client-side logical representation of the Windows Azure Blob Service. This client is used to configure and execute requests against the Blob Service. /// @@ -2364,7 +2501,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to return a result segment containing a collection of objects. + /// Initiates an asynchronous operation to return a result segment containing a collection of objects. /// /// An returned by a previous listing operation. /// A object of type that represents the current operation. @@ -2374,7 +2511,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to return a result segment containing a collection of objects. + /// Initiates an asynchronous operation to return a result segment containing a collection of objects. /// /// The container name prefix. /// An returned by a previous listing operation. @@ -2385,7 +2522,23 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to return a result segment containing a collection of objects. 
+ /// Initiates an asynchronous operation to return a result segment containing a collection of objects. + /// + /// The container name prefix. + /// An enumeration describing which items to include in the listing. + /// A non-negative integer value that indicates the maximum number of results to be returned + /// in the result segment, up to the per-operation limit of 5000. If this value is 0, the maximum possible number of results will be returned, up to 5000. + /// An returned by a previous listing operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task list_containers_segmented_async(const utility::string_t& prefix, container_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context) const + { + return list_containers_segmented_async(prefix, includes, max_results, token, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to return a result segment containing a collection of objects. /// /// The container name prefix. /// An enumeration describing which items to include in the listing. @@ -2394,8 +2547,9 @@ namespace azure { namespace storage { /// An returned by a previous listing operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. 
- WASTORAGE_API pplx::task list_containers_segmented_async(const utility::string_t& prefix, container_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task list_containers_segmented_async(const utility::string_t& prefix, container_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Returns an that can be used to to lazily enumerate a collection of blob items. @@ -2449,7 +2603,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to return an containing a collection of blob items in the container. + /// Initiates an asynchronous operation to return an containing a collection of blob items in the container. /// /// The blob name prefix. /// An returned by a previous listing operation. @@ -2460,7 +2614,24 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to return an containing a collection of blob items in the container. + /// Initiates an asynchronous operation to return an containing a collection of blob items in the container. + /// + /// The blob name prefix. + /// Indicates whether to list blobs in a flat listing, or whether to list blobs hierarchically, by virtual directory. + /// An enumeration describing which items to include in the listing. + /// A non-negative integer value that indicates the maximum number of results to be returned at a time, up to the + /// per-operation limit of 5000. If this value is 0, the maximum possible number of results will be returned, up to 5000. + /// An returned by a previous listing operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. 
+ /// A object of type that represents the current operation. + pplx::task list_blobs_segmented_async(const utility::string_t& prefix, bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context) const + { + return list_blobs_segmented_async(prefix, use_flat_blob_listing, includes, max_results, token, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to return an containing a collection of blob items in the container. /// /// The blob name prefix. /// Indicates whether to list blobs in a flat listing, or whether to list blobs hierarchically, by virtual directory. @@ -2470,8 +2641,9 @@ namespace azure { namespace storage { /// An returned by a previous listing operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. - WASTORAGE_API pplx::task list_blobs_segmented_async(const utility::string_t& prefix, bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task list_blobs_segmented_async(const utility::string_t& prefix, bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Gets the service properties for the Blob service client. @@ -2494,7 +2666,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to get the properties of the service. 
+ /// Initiates an asynchronous operation to get the properties of the service. /// /// A object of type that represents the current operation. pplx::task download_service_properties_async() const @@ -2503,12 +2675,24 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to get the properties of the service. + /// Initiates an asynchronous operation to get the properties of the service. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task download_service_properties_async(const blob_request_options& options, operation_context context) const + { + return download_service_properties_async(options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to get the properties of the service. /// /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. - WASTORAGE_API pplx::task download_service_properties_async(const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task download_service_properties_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Sets the service properties for the Blob service client. @@ -2533,7 +2717,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to set the service properties for the Blob service client. + /// Initiates an asynchronous operation to set the service properties for the Blob service client. /// /// The for the Blob service client. /// An enumeration describing which items to include when setting service properties. 
@@ -2544,14 +2728,28 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to set the service properties for the Blob service client. + /// Initiates an asynchronous operation to set the service properties for the Blob service client. /// /// The for the Blob service client. /// An enumeration describing which items to include when setting service properties. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task upload_service_properties_async(const service_properties& properties, const service_properties_includes& includes, const blob_request_options& options, operation_context context) const; + pplx::task upload_service_properties_async(const service_properties& properties, const service_properties_includes& includes, const blob_request_options& options, operation_context context) const + { + return upload_service_properties_async(properties, includes, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to set the service properties for the Blob service client. + /// + /// The for the Blob service client. + /// An enumeration describing which items to include when setting service properties. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object that represents the current operation. + WASTORAGE_API pplx::task upload_service_properties_async(const service_properties& properties, const service_properties_includes& includes, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Gets the service stats for the Blob service client. 
@@ -2574,7 +2772,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to get the stats of the service. + /// Initiates an asynchronous operation to get the stats of the service. /// /// A object of type that represents the current operation. pplx::task download_service_stats_async() const { return download_service_stats_async(blob_request_options(), operation_context()); } @@ -2583,12 +2781,73 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to get the stats of the service. + /// Initiates an asynchronous operation to get the stats of the service. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task download_service_stats_async(const blob_request_options& options, operation_context context) const + { + return download_service_stats_async(options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to get the stats of the service. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + WASTORAGE_API pplx::task download_service_stats_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; + + /// + /// Gets the account properties of the Blob service client. + /// + /// The for the Blob service client. + account_properties download_account_properties() const + { + return download_account_properties_async().get(); + } + + /// + /// Gets the account properties of the Blob service client. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// The for the Blob service client.
+ account_properties download_account_properties(const blob_request_options& options, operation_context context) const + { + return download_account_properties_async(options, context).get(); + } + + /// + /// Initiates an asynchronous operation to get the account properties of the service. + /// + /// A object of type that represents the current operation. + pplx::task download_account_properties_async() const + { + return download_account_properties_async(blob_request_options(), operation_context()); + } + + /// + /// Initiates an asynchronous operation to get the account properties of the service. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task download_account_properties_async(const blob_request_options& options, operation_context context) const + { + return download_account_properties_async(options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to get the account properties of the service. /// /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. - WASTORAGE_API pplx::task download_service_stats_async(const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task download_account_properties_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Returns a reference to an object. @@ -2635,12 +2894,47 @@ namespace azure { namespace storage { m_directory_delimiter = std::move(value); } + /// + /// Gets a key that can be used to sign a user delegation SAS (shared access signature). 
+ /// + /// The start time for the user delegation key. + /// The expiry time for the user delegation key. + /// The user delegation key. + user_delegation_key get_user_delegation_key(const utility::datetime& start, const utility::datetime& expiry) + { + return get_user_delegation_key_async(start, expiry).get(); + } + + /// + /// Initiates an asynchronous operation to get a key that can be used to sign a user delegation SAS (shared access signature). + /// + /// The start time for the user delegation key. + /// The expiry time for the user delegation key. + /// A object of type that contains the user delegation key. + pplx::task get_user_delegation_key_async(const utility::datetime& start, const utility::datetime& expiry) + { + return get_user_delegation_key_async(start, expiry, blob_request_options(), operation_context(), pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to get a key that can be used to sign a user delegation SAS (shared access signature). + /// + /// The start time for the user delegation key. + /// The expiry time for the user delegation key. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that contains the user delegation key.
+ WASTORAGE_API pplx::task get_user_delegation_key_async(const utility::datetime& start, const utility::datetime& expiry, const request_options& modified_options, operation_context context, const pplx::cancellation_token& cancellation_token); + private: + pplx::task download_account_properties_base_async(const storage_uri& uri, const request_options& modified_options, operation_context context, const pplx::cancellation_token& cancellation_token) const; void initialize() { set_authentication_scheme(azure::storage::authentication_scheme::shared_key); - m_default_request_options.set_retry_policy(exponential_retry_policy()); + if (!m_default_request_options.retry_policy().is_valid()) + m_default_request_options.set_retry_policy(exponential_retry_policy()); m_directory_delimiter = protocol::directory_delimiter; } @@ -2648,7 +2942,10 @@ namespace azure { namespace storage { blob_request_options m_default_request_options; utility::string_t m_directory_delimiter; - }; + + friend class cloud_blob_container; + friend class cloud_blob; + }; //cloud_blob_client /// /// Represents a container in the Windows Azure Blob service. @@ -2656,6 +2953,7 @@ namespace azure { namespace storage { class cloud_blob_container { public: + /// /// Initializes a new instance of the class. /// @@ -2740,6 +3038,14 @@ namespace azure { namespace storage { /// A string containing a shared access signature. WASTORAGE_API utility::string_t get_shared_access_signature(const blob_shared_access_policy& policy, const utility::string_t& stored_policy_identifier) const; + /// + /// Returns a user delegation SAS for the container. + /// + /// User delegation key used to sign this SAS. + /// The access policy for the shared access signature. + /// A string containing a shared access signature. + WASTORAGE_API utility::string_t get_user_delegation_sas(const user_delegation_key& key, const blob_shared_access_policy& policy) const; + /// /// Gets a reference to a blob in this container. 
/// @@ -2827,7 +3133,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to retrieve the container's attributes. + /// Initiates an asynchronous operation to retrieve the container's attributes. /// /// A object that represents the current operation. pplx::task download_attributes_async() @@ -2836,13 +3142,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to retrieve the container's attributes. + /// Initiates an asynchronous operation to retrieve the container's attributes. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task download_attributes_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return download_attributes_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to retrieve the container's attributes. /// /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task download_attributes_async(const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task download_attributes_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Sets the container's user-defined metadata. 
@@ -2864,7 +3183,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to set the container's user-defined metadata. + /// Initiates an asynchronous operation to set the container's user-defined metadata. /// /// A object that represents the current operation. pplx::task upload_metadata_async() { return upload_metadata_async(access_condition(), blob_request_options(), operation_context()); } @@ -2873,13 +3192,75 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to set the container's user-defined metadata. + /// Initiates an asynchronous operation to set the container's user-defined metadata. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task upload_metadata_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return upload_metadata_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to set the container's user-defined metadata. /// /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task upload_metadata_async(const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task upload_metadata_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); + + /// + /// Gets properties for the account this container resides on. + /// + /// The for the Blob service client.
+ account_properties download_account_properties() const + { + return download_account_properties_async().get(); + } + + /// + /// Gets properties for the account this container resides on. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// The for the Blob service client. + account_properties download_account_properties(const blob_request_options& options, operation_context context) const + { + return download_account_properties_async(options, context).get(); + } + + /// + /// Initiates an asynchronous operation to get properties for the account this container resides on. + /// + /// A object of type that represents the current operation. + pplx::task download_account_properties_async() const + { + return download_account_properties_async(blob_request_options(), operation_context()); + } + + /// + /// Initiates an asynchronous operation to get properties for the account this container resides on. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task download_account_properties_async(const blob_request_options& options, operation_context context) const + { + return download_account_properties_async(options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to get properties for the account this container resides on. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. 
+ WASTORAGE_API pplx::task download_account_properties_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Acquires a lease on the container. @@ -2907,7 +3288,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to acquire a lease on the container. + /// Initiates an asynchronous operation to acquire a lease on the container. /// /// An object representing the span of time for which to acquire the lease. /// A string representing the proposed lease ID for the new lease. May be an empty string if no lease ID is proposed.. @@ -2918,15 +3299,30 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to acquire a lease on the container. + /// Initiates an asynchronous operation to acquire a lease on the container. + /// + /// An representing the span of time for which to acquire the lease. + /// A string representing the proposed lease ID for the new lease. May be an empty string if no lease ID is proposed. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task acquire_lease_async(const azure::storage::lease_time& duration, const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return acquire_lease_async(duration, proposed_lease_id, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to acquire a lease on the container. /// /// An representing the span of time for which to acquire the lease. /// A string representing the proposed lease ID for the new lease. May be an empty string if no lease ID is proposed. 
/// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. - WASTORAGE_API pplx::task acquire_lease_async(const azure::storage::lease_time& duration, const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task acquire_lease_async(const azure::storage::lease_time& duration, const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Renews a lease on the container. @@ -2948,7 +3344,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to renew a lease on the container. + /// Initiates an asynchronous operation to renew a lease on the container. /// /// A object that represents the current operation. pplx::task renew_lease_async() const @@ -2957,13 +3353,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to renew a lease on the container. + /// Initiates an asynchronous operation to renew a lease on the container. + /// + /// An object that represents the access condition for the operation, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. 
+ pplx::task renew_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return renew_lease_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to renew a lease on the container. /// /// An object that represents the access condition for the operation, including a required lease ID. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task renew_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task renew_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Changes the lease ID for a lease on the container. @@ -2989,7 +3398,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to change the lease ID for a lease on the container. + /// Initiates an asynchronous operation to change the lease ID for a lease on the container. /// /// A string containing the proposed lease ID for the lease. May not be empty. /// A object of type that represents the current operation. @@ -2999,14 +3408,28 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to change the lease ID for a lease on the container. + /// Initiates an asynchronous operation to change the lease ID for a lease on the container. + /// + /// A string containing the proposed lease ID for the lease. May not be empty. + /// An object that represents the access condition for the operation, including a required lease ID. 
+ /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task change_lease_async(const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return change_lease_async(proposed_lease_id, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to change the lease ID for a lease on the container. /// /// A string containing the proposed lease ID for the lease. May not be empty. /// An object that represents the access condition for the operation, including a required lease ID. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. - WASTORAGE_API pplx::task change_lease_async(const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task change_lease_async(const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Releases a lease on the container. @@ -3028,7 +3451,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to release a lease on the container. + /// Initiates an asynchronous operation to release a lease on the container. /// /// A object that represents the current operation. pplx::task release_lease_async() const @@ -3037,13 +3460,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to release a lease on the container. 
+ /// Initiates an asynchronous operation to release a lease on the container. + /// + /// An object that represents the access condition for the operation, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task release_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return release_lease_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to release a lease on the container. /// /// An object that represents the access condition for the operation, including a required lease ID. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task release_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task release_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Breaks the current lease on the container. @@ -3069,7 +3505,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to break the current lease on the container. + /// Initiates an asynchronous operation to break the current lease on the container. /// /// An representing the amount of time to allow the lease to remain. /// A object of type that represents the current operation. @@ -3079,14 +3515,28 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to break the current lease on the container. 
+ /// Initiates an asynchronous operation to break the current lease on the container. /// /// An representing the amount of time to allow the lease to remain. /// An object that represents the access condition for the operation, including a required lease ID. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. /// A object of type that represents the current operation. - WASTORAGE_API pplx::task break_lease_async(const azure::storage::lease_break_period& break_period, const access_condition& condition, const blob_request_options& options, operation_context context) const; + pplx::task break_lease_async(const azure::storage::lease_break_period& break_period, const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return break_lease_async(break_period, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to break the current lease on the container. + /// + /// An representing the amount of time to allow the lease to remain. + /// An object that represents the access condition for the operation, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + WASTORAGE_API pplx::task break_lease_async(const azure::storage::lease_break_period& break_period, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Creates the container. @@ -3109,7 +3559,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to create the container. + /// Initiates an asynchronous operation to create the container. 
/// /// A object that represents the current operation. pplx::task create_async() @@ -3118,13 +3568,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to create the container. + /// Initiates an asynchronous operation to create the container. + /// + /// An value that specifies whether data in the container may be accessed publicly and what level of access is to be allowed. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task create_async(blob_container_public_access_type public_access, const blob_request_options& options, operation_context context) + { + return create_async(public_access, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to create the container. /// /// An value that specifies whether data in the container may be accessed publicly and what level of access is to be allowed. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task create_async(blob_container_public_access_type public_access, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task create_async(blob_container_public_access_type public_access, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Creates the container if it does not already exist. @@ -3148,7 +3611,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to create the container if it does not already exist. + /// Initiates an asynchronous operation to create the container if it does not already exist. 
/// /// A object that represents the current operation. pplx::task create_if_not_exists_async() @@ -3157,13 +3620,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to create the container if it does not already exist and specify the level of public access to the container's data. + /// Initiates an asynchronous operation to create the container if it does not already exist and specify the level of public access to the container's data. + /// + /// An value that specifies whether data in the container may be accessed publicly and what level of access is to be allowed. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task create_if_not_exists_async(blob_container_public_access_type public_access, const blob_request_options& options, operation_context context) + { + return create_if_not_exists_async(public_access, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to create the container if it does not already exist and specify the level of public access to the container's data. /// /// An value that specifies whether data in the container may be accessed publicly and what level of access is to be allowed. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. 
- WASTORAGE_API pplx::task create_if_not_exists_async(blob_container_public_access_type public_access, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task create_if_not_exists_async(blob_container_public_access_type public_access, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Deletes the container. @@ -3185,7 +3661,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to delete the container. + /// Initiates an asynchronous operation to delete the container. /// /// A object that represents the current operation. pplx::task delete_container_async() @@ -3194,13 +3670,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to delete the container. + /// Initiates an asynchronous operation to delete the container. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task delete_container_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return delete_container_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to delete the container. /// /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. 
- WASTORAGE_API pplx::task delete_container_async(const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task delete_container_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Deletes the container if it already exists. @@ -3224,7 +3713,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to delete the container if it already exists. + /// Initiates an asynchronous operation to delete the container if it already exists. /// /// true if the container did not already exist and was created; otherwise false. /// A object that represents the current operation. @@ -3234,14 +3723,28 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to delete the container if it already exists. + /// Initiates an asynchronous operation to delete the container if it already exists. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// true if the container did not already exist and was created; otherwise false. + /// A object that represents the current operation. + pplx::task delete_container_if_exists_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return delete_container_if_exists_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to delete the container if it already exists. /// /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. 
/// true if the container did not already exist and was created; otherwise false. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task delete_container_if_exists_async(const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task delete_container_if_exists_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Returns an that can be used to to lazily enumerate a collection of blob items in the container. @@ -3304,7 +3807,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to return an containing a collection of blob items + /// Initiates an asynchronous operation to return an containing a collection of blob items /// in the container. /// /// A continuation token returned by a previous listing operation. @@ -3315,7 +3818,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to return an containing a collection of blob items + /// Initiates an asynchronous operation to return an containing a collection of blob items /// in the container. /// /// The blob name prefix. @@ -3327,7 +3830,25 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to return an containing a collection of blob items + /// Initiates an asynchronous operation to return an containing a collection of blob items + /// in the container. + /// + /// The blob name prefix. + /// Indicates whether to list blobs in a flat listing, or whether to list blobs hierarchically, by virtual directory. + /// An enumeration describing which items to include in the listing. + /// A non-negative integer value that indicates the maximum number of results to be returned at a time, up to the + /// per-operation limit of 5000. 
If this value is 0, the maximum possible number of results will be returned, up to 5000. + /// A continuation token returned by a previous listing operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task list_blobs_segmented_async(const utility::string_t& prefix, bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context) const + { + return list_blobs_segmented_async(prefix, use_flat_blob_listing, includes, max_results, token, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to return an containing a collection of blob items /// in the container. /// /// The blob name prefix. @@ -3338,8 +3859,9 @@ namespace azure { namespace storage { /// A continuation token returned by a previous listing operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. - WASTORAGE_API pplx::task list_blobs_segmented_async(const utility::string_t& prefix, bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task list_blobs_segmented_async(const utility::string_t& prefix, bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Sets permissions for the container. 
@@ -3363,7 +3885,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to set permissions for the container. + /// Initiates an asynchronous operation to set permissions for the container. /// /// The permissions to apply to the container. /// A object that represents the current operation. @@ -3373,14 +3895,28 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to set permissions for the container. + /// Initiates an asynchronous operation to set permissions for the container. + /// + /// The permissions to apply to the container. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task upload_permissions_async(const blob_container_permissions& permissions, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return upload_permissions_async(permissions, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to set permissions for the container. /// /// The permissions to apply to the container. /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. 
- WASTORAGE_API pplx::task upload_permissions_async(const blob_container_permissions& permissions, const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task upload_permissions_async(const blob_container_permissions& permissions, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Gets the permissions settings for the container. @@ -3404,7 +3940,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to get permissions settings for the container. + /// Initiates an asynchronous operation to get permissions settings for the container. /// /// A object of type that represents the current operation. pplx::task download_permissions_async() @@ -3413,13 +3949,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to get permissions settings for the container. + /// Initiates an asynchronous operation to get permissions settings for the container. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task download_permissions_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return download_permissions_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to get permissions settings for the container. /// /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. 
/// A object of type that represents the current operation. - WASTORAGE_API pplx::task download_permissions_async(const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task download_permissions_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Checks existence of the container. @@ -3442,7 +3991,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to check the existence of the container. + /// Initiates an asynchronous operation to check the existence of the container. /// /// A object that represents the current operation. pplx::task exists_async() @@ -3451,14 +4000,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to check the existence of the container. + /// Initiates an asynchronous operation to check the existence of the container. /// /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. /// A object that that represents the current operation. pplx::task exists_async(const blob_request_options& options, operation_context context) { - return exists_async(false, options, context); + return exists_async(options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to check the existence of the container. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object that that represents the current operation. 
+ pplx::task exists_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return exists_async_impl(false, options, context, cancellation_token); } /// @@ -3527,14 +4088,14 @@ namespace azure { namespace storage { private: void init(storage_credentials credentials); - WASTORAGE_API pplx::task exists_async(bool primary_only, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task exists_async_impl(bool primary_only, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); cloud_blob_client m_client; utility::string_t m_name; storage_uri m_uri; std::shared_ptr m_metadata; std::shared_ptr m_properties; - }; + }; // End of cloud_blob_container /// /// Represents a virtual directory of blobs, designated by a delimiter character. @@ -3703,7 +4264,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to return an containing a collection of blob items + /// Initiates an asynchronous operation to return an containing a collection of blob items /// in the container. /// /// A continuation token returned by a previous listing operation. @@ -3714,7 +4275,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to return an containing a collection of blob items + /// Initiates an asynchronous operation to return an containing a collection of blob items /// in the container. /// /// Indicates whether to list blobs in a flat listing, or whether to list blobs hierarchically, by virtual directory. @@ -3725,13 +4286,31 @@ namespace azure { namespace storage { /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. /// A object of type that represents the current operation. 
- WASTORAGE_API pplx::task list_blobs_segmented_async(bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context) const; + pplx::task list_blobs_segmented_async(bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context) const + { + return list_blobs_segmented_async(use_flat_blob_listing, includes, max_results, token, options, context, pplx::cancellation_token::none()); + } /// - /// Gets the Blob service client for the virtual directory. + /// Initiates an asynchronous operation to return an containing a collection of blob items + /// in the container. /// - /// A client object that specifies the endpoint for the Windows Azure Blob service. - const cloud_blob_client& service_client() const + /// Indicates whether to list blobs in a flat listing, or whether to list blobs hierarchically, by virtual directory. + /// An enumeration describing which items to include in the listing. + /// A non-negative integer value that indicates the maximum number of results to be returned at a time, up to the + /// per-operation limit of 5000. If this value is 0, the maximum possible number of results will be returned, up to 5000. + /// A continuation token returned by a previous listing operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. 
+ WASTORAGE_API pplx::task list_blobs_segmented_async(bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; + + /// + /// Gets the Blob service client for the virtual directory. + /// + /// A client object that specifies the endpoint for the Windows Azure Blob service. + const cloud_blob_client& service_client() const { return m_container.service_client(); } @@ -3854,6 +4433,7 @@ namespace azure { namespace storage { m_copy_state = std::move(other.m_copy_state); m_name = std::move(other.m_name); m_snapshot_time = std::move(other.m_snapshot_time); + m_version_id = std::move(other.m_version_id); m_container = std::move(other.m_container); m_uri = std::move(other.m_uri); } @@ -3904,6 +4484,27 @@ namespace azure { namespace storage { /// A string containing a shared access signature. WASTORAGE_API utility::string_t get_shared_access_signature(const blob_shared_access_policy& policy, const utility::string_t& stored_policy_identifier, const cloud_blob_shared_access_headers& headers) const; + + /// + /// Returns a user delegation SAS for the blob. + /// + /// User delegation key used to sign this SAS. + /// The access policy for the shared access signature. + /// A string containing a shared access signature. + utility::string_t get_user_delegation_sas(const user_delegation_key& key, const blob_shared_access_policy& policy) const + { + return get_user_delegation_sas(key, policy, cloud_blob_shared_access_headers()); + } + + /// + /// Returns a user delegation SAS for the blob. + /// + /// User delegation key used to sign this SAS. + /// The access policy for the shared access signature. + /// The optional header values to set for a blob returned with this SAS. + /// A string containing a shared access signature. 
+ WASTORAGE_API utility::string_t get_user_delegation_sas(const user_delegation_key& key, const blob_shared_access_policy& policy, const cloud_blob_shared_access_headers& headers) const; + /// /// Opens a stream for reading from the blob. /// @@ -3926,7 +4527,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to open a stream for reading from the blob. + /// Initiates an asynchronous operation to open a stream for reading from the blob. /// /// A object of type that represents the current operation. pplx::task open_read_async() @@ -3935,13 +4536,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to open a stream for reading from the blob. + /// Initiates an asynchronous operation to open a stream for reading from the blob. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task open_read_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return open_read_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to open a stream for reading from the blob. /// /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. 
- WASTORAGE_API pplx::task open_read_async(const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task open_read_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Checks existence of the blob. @@ -3964,7 +4578,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to check the existence of the blob. + /// Initiates an asynchronous operation to check the existence of the blob. /// /// A object that represents the current operation. pplx::task exists_async() @@ -3973,14 +4587,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to check the existence of the blob. + /// Initiates an asynchronous operation to check the existence of the blob. /// /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. /// A object that represents the current operation. pplx::task exists_async(const blob_request_options& options, operation_context context) { - return exists_async(false, options, context); + return exists_async(options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to check the existence of the blob. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object that represents the current operation. 
+ pplx::task exists_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return exists_async_impl(false, options, context, cancellation_token); } /// @@ -4003,7 +4629,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to populate a blob's properties and metadata. + /// Initiates an asynchronous operation to populate a blob's properties and metadata. /// /// A object that represents the current operation. pplx::task download_attributes_async() @@ -4012,13 +4638,29 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to populate a blob's properties and metadata. + /// Initiates an asynchronous operation to populate a blob's properties and metadata. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task download_attributes_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return download_attributes_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to populate a blob's properties and metadata. /// /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. 
- WASTORAGE_API pplx::task download_attributes_async(const access_condition& condition, const blob_request_options& options, operation_context context); + pplx::task download_attributes_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return download_attributes_async_impl(condition, options, context, cancellation_token); + } /// /// Updates the blob's metadata. @@ -4040,7 +4682,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to update the blob's metadata. + /// Initiates an asynchronous operation to update the blob's metadata. /// /// A object that represents the current operation. pplx::task upload_metadata_async() @@ -4049,13 +4691,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to update the blob's metadata. + /// Initiates an asynchronous operation to update the blob's metadata. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task upload_metadata_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return upload_metadata_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to update the blob's metadata. /// /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. 
- WASTORAGE_API pplx::task upload_metadata_async(const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task upload_metadata_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Updates the blob's properties. @@ -4077,7 +4732,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to update the blob's properties. + /// Initiates an asynchronous operation to update the blob's properties. /// /// A object that represents the current operation. pplx::task upload_properties_async() @@ -4086,13 +4741,79 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to update the blob's properties. + /// Initiates an asynchronous operation to update the blob's properties. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task upload_properties_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return upload_properties_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to update the blob's properties. /// /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. 
- WASTORAGE_API pplx::task upload_properties_async(const access_condition& condition, const blob_request_options& options, operation_context context); + pplx::task upload_properties_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return upload_properties_async_impl(condition, options, context, cancellation_token, true); + } + + /// + /// Gets properties for the account this blob resides on. + /// + /// The for the Blob service client. + account_properties download_account_properties() const + { + return download_account_properties_async().get(); + } + + /// + /// Gets properties for the account this blob resides on. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// The for the Blob service client. + account_properties download_account_properties(const blob_request_options& options, operation_context context) const + { + return download_account_properties_async(options, context).get(); + } + + /// + /// Initiates an asynchronous operation to get properties for the account this blob resides on. + /// + /// A object of type that represents the current operation. + pplx::task download_account_properties_async() const + { + return download_account_properties_async(blob_request_options(), operation_context()); + } + + /// + /// Initiates an asynchronous operation to get properties for the account this blob resides on. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. 
+ pplx::task download_account_properties_async(const blob_request_options& options, operation_context context) const + { + return download_account_properties_async(options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to get properties for the account this blob resides on. + /// + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + WASTORAGE_API pplx::task download_account_properties_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Deletes the blob. @@ -4115,7 +4836,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to delete the blob. + /// Initiates an asynchronous operation to delete the blob. /// /// A object that represents the current operation. pplx::task delete_blob_async() @@ -4124,14 +4845,28 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to delete the blob. + /// Initiates an asynchronous operation to delete the blob. + /// + /// Indicates whether to delete only the blob, to delete the blob and all snapshots, or to delete only snapshots. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. 
+ pplx::task delete_blob_async(delete_snapshots_option snapshots_option, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return delete_blob_async(snapshots_option, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to delete the blob. /// /// Indicates whether to delete only the blob, to delete the blob and all snapshots, or to delete only snapshots. /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task delete_blob_async(delete_snapshots_option snapshots_option, const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task delete_blob_async(delete_snapshots_option snapshots_option, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Deletes the blob if it already exists. @@ -4156,7 +4891,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to delete the blob if it already exists. + /// Initiates an asynchronous operation to delete the blob if it already exists. /// /// A object that represents the current operation. pplx::task delete_blob_if_exists_async() @@ -4165,14 +4900,28 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to delete the blob if it already exists. + /// Initiates an asynchronous operation to delete the blob if it already exists. + /// + /// Indicates whether to delete only the blob, to delete the blob and all snapshots, or to delete only snapshots. 
+ /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task delete_blob_if_exists_async(delete_snapshots_option snapshots_option, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return delete_blob_if_exists_async(snapshots_option, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to delete the blob if it already exists. /// /// Indicates whether to delete only the blob, to delete the blob and all snapshots, or to delete only snapshots. /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task delete_blob_if_exists_async(delete_snapshots_option snapshots_option, const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task delete_blob_if_exists_async(delete_snapshots_option snapshots_option, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Acquires a lease on the blob. @@ -4200,7 +4949,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to acquire a lease on the blob. + /// Initiates an asynchronous operation to acquire a lease on the blob. /// /// An representing the span of time for which to acquire the lease. /// A string representing the proposed lease ID for the new lease. May be an empty string if no lease ID is proposed. 
@@ -4211,15 +4960,30 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to acquire a lease on the blob. + /// Initiates an asynchronous operation to acquire a lease on the blob. + /// + /// An representing the span of time for which to acquire the lease. + /// A string representing the proposed lease ID for the new lease. May be an empty string if no lease ID is proposed. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task acquire_lease_async(const azure::storage::lease_time& duration, const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return acquire_lease_async(duration, proposed_lease_id, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to acquire a lease on the blob. /// /// An representing the span of time for which to acquire the lease. /// A string representing the proposed lease ID for the new lease. May be an empty string if no lease ID is proposed. /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. 
- WASTORAGE_API pplx::task acquire_lease_async(const azure::storage::lease_time& duration, const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task acquire_lease_async(const azure::storage::lease_time& duration, const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Renews a lease on the blob. @@ -4242,7 +5006,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to renew a lease on the blob. + /// Initiates an asynchronous operation to renew a lease on the blob. /// /// An object that represents the access conditions for the blob, including a required lease ID. /// A object that represents the current operation. @@ -4252,13 +5016,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to renew a lease on the blob. + /// Initiates an asynchronous operation to renew a lease on the blob. + /// + /// An object that represents the access conditions for the blob, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task renew_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return renew_lease_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to renew a lease on the blob. /// /// An object that represents the access conditions for the blob, including a required lease ID. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. 
+ /// An object that is used to cancel the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task renew_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task renew_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Changes the lease ID on the blob. @@ -4285,7 +5062,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to change the lease ID on the blob. + /// Initiates an asynchronous operation to change the lease ID on the blob. /// /// A string containing the proposed lease ID for the lease. May not be empty. /// An object that represents the access conditions for the blob, including a required lease ID. @@ -4296,14 +5073,28 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to change the lease ID on the blob. + /// Initiates an asynchronous operation to change the lease ID on the blob. + /// + /// A string containing the proposed lease ID for the lease. May not be empty. + /// An object that represents the access conditions for the blob, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task change_lease_async(const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return change_lease_async(proposed_lease_id, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to change the lease ID on the blob. /// /// A string containing the proposed lease ID for the lease. 
May not be empty. /// An object that represents the access conditions for the blob, including a required lease ID. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. - WASTORAGE_API pplx::task change_lease_async(const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task change_lease_async(const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Releases the lease on the blob. @@ -4326,7 +5117,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to release the lease on the blob. + /// Initiates an asynchronous operation to release the lease on the blob. /// /// An object that represents the access conditions for the blob, including a required lease ID. /// A object that represents the current operation. @@ -4336,13 +5127,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to release the lease on the blob. + /// Initiates an asynchronous operation to release the lease on the blob. + /// + /// An object that represents the access conditions for the blob, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. 
+ pplx::task release_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return release_lease_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to release the lease on the blob. /// /// An object that represents the access conditions for the blob, including a required lease ID. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task release_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task release_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Breaks the current lease on the blob. @@ -4368,7 +5172,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to break the current lease on the blob. + /// Initiates an asynchronous operation to break the current lease on the blob. /// /// An representing the amount of time to allow the lease to remain. /// A object of type that represents the current operation. @@ -4378,14 +5182,28 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to break the current lease on the blob. + /// Initiates an asynchronous operation to break the current lease on the blob. + /// + /// An representing the amount of time to allow the lease to remain. + /// An object that represents the access conditions for the blob, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. 
+ /// A object of type that represents the current operation. + pplx::task break_lease_async(const azure::storage::lease_break_period& break_period, const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return break_lease_async(break_period, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to break the current lease on the blob. /// /// An representing the amount of time to allow the lease to remain. /// An object that represents the access conditions for the blob, including a required lease ID. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. - WASTORAGE_API pplx::task break_lease_async(const azure::storage::lease_break_period& break_period, const access_condition& condition, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task break_lease_async(const azure::storage::lease_break_period& break_period, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Downloads the contents of a blob to a stream. @@ -4409,7 +5227,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to download the contents of a blob to a stream. + /// Initiates an asynchronous operation to download the contents of a blob to a stream. /// /// The target stream. /// A object that represents the current operation. @@ -4419,7 +5237,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to download the contents of a blob to a stream. + /// Initiates an asynchronous operation to download the contents of a blob to a stream. 
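Each of the lease overloads above accepts a `pplx::cancellation_token` whose cancellation state is shared with the `cancellation_token_source` that produced it. A rough, self-contained sketch of how such a source/token pair can work (`cancel_source` is a hypothetical stand-in here, not the real pplx implementation):

```cpp
#include <atomic>
#include <memory>

// Minimal sketch of a cancellation_token_source / cancellation_token pair,
// modeled on the pplx types the new overloads accept; not the pplx API itself.
class cancel_source {
    std::shared_ptr<std::atomic<bool>> flag_ = std::make_shared<std::atomic<bool>>(false);
public:
    void cancel() { flag_->store(true); }

    // The "token" is a read-only view of the shared flag.
    class token {
        std::shared_ptr<std::atomic<bool>> flag_;
    public:
        explicit token(std::shared_ptr<std::atomic<bool>> f) : flag_(std::move(f)) {}
        bool is_canceled() const { return flag_->load(); }
    };

    token get_token() const { return token(flag_); }
};

// A long-running operation polls the token between steps, the way a
// cancellable SDK call can abandon remaining work once the token is canceled.
int run_until_canceled(const cancel_source::token& t, int max_steps) {
    int steps = 0;
    while (steps < max_steps && !t.is_canceled()) ++steps;
    return steps;
}
```

Because the flag is shared through a `shared_ptr`, tokens stay valid and observable even after the source that issued them goes out of scope elsewhere.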
/// /// The target stream. /// An object that represents the access condition for the operation. @@ -4428,7 +5246,21 @@ namespace azure { namespace storage { /// A object that represents the current operation. pplx::task download_to_stream_async(concurrency::streams::ostream target, const access_condition& condition, const blob_request_options& options, operation_context context) { - return download_range_to_stream_async(target, std::numeric_limits::max(), 0, condition, options, context); + return download_to_stream_async(target, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to download the contents of a blob to a stream. + /// + /// The target stream. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object that represents the current operation. + pplx::task download_to_stream_async(concurrency::streams::ostream target, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return download_range_to_stream_async(target, std::numeric_limits::max(), 0, condition, options, context, cancellation_token); } /// @@ -4457,7 +5289,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to download a range of bytes in a blob to a stream. + /// Initiates an asynchronous operation to download a range of bytes in a blob to a stream. /// /// The target stream. /// The offset at which to begin downloading the blob, in bytes. @@ -4469,7 +5301,22 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to download a range of bytes in a blob to a stream. 
+ /// Initiates an asynchronous operation to download a range of bytes in a blob to a stream. + /// + /// The target stream. + /// The offset at which to begin downloading the blob, in bytes. + /// The length of the data to download from the blob, in bytes. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task download_range_to_stream_async(concurrency::streams::ostream target, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return download_range_to_stream_async(target, offset, length, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to download a range of bytes in a blob to a stream. /// /// The target stream. /// The offset at which to begin downloading the blob, in bytes. @@ -4477,8 +5324,9 @@ namespace azure { namespace storage { /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. 
- WASTORAGE_API pplx::task download_range_to_stream_async(concurrency::streams::ostream target, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task download_range_to_stream_async(concurrency::streams::ostream target, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Downloads the contents of a blob to a file. @@ -4502,7 +5350,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to download the contents of a blob to a file. + /// Initiates an asynchronous operation to download the contents of a blob to a file. /// /// The target file. /// A object that represents the current operation. @@ -4512,14 +5360,28 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to download the contents of a blob to a file. + /// Initiates an asynchronous operation to download the contents of a blob to a file. + /// + /// The target file. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task download_to_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return download_to_file_async(path, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to download the contents of a blob to a file. /// /// The target file. /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. 
/// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task download_to_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task download_to_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Begins an operation to copy a blob's contents, properties, and metadata to a new blob. @@ -4594,7 +5456,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. /// /// The URI of a source blob. /// A object of type that represents the current operation. @@ -4610,7 +5472,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. /// /// The URI of a source blob. /// A object of type that represents the current operation. @@ -4626,7 +5488,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. /// /// The URI of a source blob. /// An object that represents the for the source blob. 
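As the forwarding body of `download_to_stream_async` earlier in this diff shows, a whole-blob download is expressed as a range download whose offset is the numeric maximum (with length 0), which the range implementation appears to treat as "the entire blob". A simplified, self-contained sketch of that sentinel convention (`download_range` and `download_all` are illustrative stand-ins operating on a string rather than a stream):

```cpp
#include <cstdint>
#include <limits>
#include <string>

// Sketch of the sentinel convention: offset == numeric max means "whole content".
// Simplified stand-in types; the SDK uses utility::size64_t and an ostream target.
std::string download_range(const std::string& blob, std::uint64_t offset, std::uint64_t length) {
    const auto whole = std::numeric_limits<std::uint64_t>::max();
    if (offset == whole) return blob;                  // sentinel: entire content
    if (offset >= blob.size()) return std::string();   // out-of-range offset yields nothing
    return blob.substr(static_cast<std::size_t>(offset), static_cast<std::size_t>(length));
}

std::string download_all(const std::string& blob) {
    // Mirrors the forwarding in the diff: offset = max, length = 0.
    return download_range(blob, std::numeric_limits<std::uint64_t>::max(), 0);
}
```

Routing the whole-blob path through the range API keeps a single download implementation, at the cost of one in-band sentinel value that the range code must check first.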
@@ -4643,7 +5505,7 @@ namespace azure { namespace storage { WASTORAGE_API pplx::task start_copy_from_blob_async(const web::http::uri& source, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context); /// - /// Intitiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. /// /// The URI of a source blob. /// An object that represents the for the source blob. @@ -4691,6 +5553,25 @@ namespace azure { namespace storage { return start_copy_async(source, source_condition, destination_condition, options, context).get(); } + /// + /// Begins an operation to copy a blob's contents, properties, and metadata to a new blob. + /// + /// The URI of a source blob. + /// Metadata that will be set on the destination blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// The copy ID associated with the copy operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. + /// + utility::string_t start_copy(const web::http::uri& source, const cloud_metadata& metadata, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context) + { + return start_copy_async(source, metadata, source_condition, destination_condition, options, context, pplx::cancellation_token::none()).get(); + } + /// /// Begins an operation to copy a blob's contents, properties, and metadata to a new blob. 
/// @@ -4723,6 +5604,25 @@ namespace azure { namespace storage { return start_copy_async(source, source_condition, destination_condition, options, context).get(); } + /// + /// Begins an operation to copy a blob's contents, properties, and metadata to a new blob. + /// + /// The URI of a source blob. + /// Metadata that will be set on the destination blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// The copy ID associated with the copy operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. + /// + utility::string_t start_copy(const cloud_blob& source, const cloud_metadata& metadata, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context) + { + return start_copy_async(source, metadata, source_condition, destination_condition, options, context, pplx::cancellation_token::none()).get(); + } + /// /// Begins an operation to copy a file's contents, properties, and metadata to a new blob. /// @@ -4756,7 +5656,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. + /// Begins an operation to copy a file's contents, properties, and metadata to a new blob. + /// + /// The URI of a source file. + /// Metadata that will be set on the destination blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. 
+ /// The copy ID associated with the copy operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. + /// + utility::string_t start_copy(const cloud_file& source, const cloud_metadata& metadata, const file_access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context) + { + return start_copy_async(source, metadata, source_condition, destination_condition, options, context, pplx::cancellation_token::none()).get(); + } + + /// + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. /// /// The URI of a source blob. /// A object of type that represents the current operation. @@ -4770,7 +5689,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. /// /// The URI of a source blob. /// A object of type that represents the current operation. @@ -4784,7 +5703,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to begin to copy a file's contents, properties, and metadata to a new blob. + /// Initiates an asynchronous operation to begin to copy a file's contents, properties, and metadata to a new blob. /// /// The URI of a source file. /// A object of type that represents the current operation. @@ -4795,7 +5714,7 @@ namespace azure { namespace storage { WASTORAGE_API pplx::task start_copy_async(const cloud_file& source); /// - /// Intitiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. 
+ /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. /// /// The URI of a source blob. /// An object that represents the for the source blob. @@ -4807,37 +5726,158 @@ namespace azure { namespace storage { /// This method fetches the blob's ETag, last-modified time, and part of the copy state. /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. /// - WASTORAGE_API pplx::task start_copy_async(const web::http::uri& source, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context); - + pplx::task start_copy_async(const web::http::uri& source, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context) + { + return start_copy_async(source, source_condition, destination_condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. + /// + /// The URI of a source blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. 
+ /// + pplx::task start_copy_async(const web::http::uri& source, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return start_copy_async(source, cloud_metadata(), source_condition, destination_condition, options, context, cancellation_token); + } + + /// + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. + /// + /// The URI of a source blob. + /// Metadata that will be set on the destination blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. + /// + pplx::task start_copy_async(const web::http::uri& source, const cloud_metadata& metadata, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return start_copy_async_impl(source, premium_blob_tier::unknown, metadata, source_condition, destination_condition, options, context, cancellation_token); + } + + /// + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. + /// + /// The URI of a source blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. 
+ /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. + /// + pplx::task start_copy_async(const cloud_blob& source, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context) + { + return start_copy_async(source, source_condition, destination_condition, options, context, pplx::cancellation_token::none()); + } + /// - /// Intitiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. /// /// The URI of a source blob. /// An object that represents the for the source blob. /// An object that represents the for the destination blob. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. 
+ /// + pplx::task start_copy_async(const cloud_blob& source, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return start_copy_async(source, cloud_metadata(), source_condition, destination_condition, options, context, cancellation_token); + } + + /// + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. + /// + /// The URI of a source blob. + /// Metadata that will be set on the destination blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. + /// + WASTORAGE_API pplx::task start_copy_async(const cloud_blob& source, const cloud_metadata& metadata, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); + + + /// + /// Initiates an asynchronous operation to begin to copy a file's contents, properties, and metadata to a new blob. + /// + /// The URI of a source file. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. 
+ /// A object of type that represents the current operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. + /// + pplx::task start_copy_async(const cloud_file& source, const file_access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context) + { + return start_copy_async(source, source_condition, destination_condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to begin to copy a file's contents, properties, and metadata to a new blob. + /// + /// The URI of a source file. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. /// /// This method fetches the blob's ETag, last-modified time, and part of the copy state. /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. 
/// - WASTORAGE_API pplx::task start_copy_async(const cloud_blob& source, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context); + pplx::task start_copy_async(const cloud_file& source, const file_access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return start_copy_async(source, cloud_metadata(), source_condition, destination_condition, options, context, cancellation_token); + } /// - /// Intitiates an asynchronous operation to begin to copy a file's contents, properties, and metadata to a new blob. + /// Initiates an asynchronous operation to begin to copy a file's contents, properties, and metadata to a new blob. /// /// The URI of a source file. + /// Metadata that will be set on the destination blob. /// An object that represents the for the source blob. /// An object that represents the for the destination blob. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. /// /// This method fetches the blob's ETag, last-modified time, and part of the copy state. /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. 
/// - WASTORAGE_API pplx::task start_copy_async(const cloud_file& source, const file_access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task start_copy_async(const cloud_file& source, const cloud_metadata& metadata, const file_access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Aborts an ongoing blob copy operation. @@ -4861,7 +5901,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to abort an ongoing blob copy operation. + /// Initiates an asynchronous operation to abort an ongoing blob copy operation. /// /// A string identifying the copy operation. /// A object that represents the current operation. @@ -4871,14 +5911,28 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to abort an ongoing blob copy operation. + /// Initiates an asynchronous operation to abort an ongoing blob copy operation. + /// + /// A string identifying the copy operation. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task abort_copy_async(const utility::string_t& copy_id, const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return abort_copy_async(copy_id, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to abort an ongoing blob copy operation. /// /// A string identifying the copy operation. /// An object that represents the access condition for the operation. 
/// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task abort_copy_async(const utility::string_t& copy_id, const access_condition& condition, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task abort_copy_async(const utility::string_t& copy_id, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Creates a snapshot of the blob. @@ -4903,7 +5957,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to create a snapshot of the blob. + /// Initiates an asynchronous operation to create a snapshot of the blob. /// /// A object of type that represents the current operation. pplx::task create_snapshot_async() @@ -4912,14 +5966,29 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to create a snapshot of the blob. + /// Initiates an asynchronous operation to create a snapshot of the blob. + /// + /// A collection of name-value pairs defining the metadata of the snapshot. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task create_snapshot_async(cloud_metadata metadata, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return create_snapshot_async(metadata, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to create a snapshot of the blob. 
/// /// A collection of name-value pairs defining the metadata of the snapshot. /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. - WASTORAGE_API pplx::task create_snapshot_async(cloud_metadata metadata, const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task create_snapshot_async(cloud_metadata metadata, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Gets the object that represents the Blob service. @@ -4993,6 +6062,51 @@ namespace azure { namespace storage { return !m_snapshot_time.empty(); } + /// + /// Sets the version id of this blob. + /// + /// The blob's version id. + void set_version_id(utility::string_t version_id) + { + m_version_id = std::move(version_id); + + web::uri primary_uri = m_uri.primary_uri(); + web::uri secondary_uri = m_uri.secondary_uri(); + + for (auto uri : std::vector<std::reference_wrapper<web::uri>>{ primary_uri, secondary_uri }) + { + auto query = web::http::uri::split_query(uri.get().query()); + if (m_version_id.empty()) + { + query.erase(protocol::uri_query_version_id); + } + else + { + query[protocol::uri_query_version_id] = m_version_id; + } + + web::uri_builder builder(uri); + builder.set_query(utility::string_t()); + for (const auto& q : query) + { + builder.append_query(q.first, q.second); + } + + uri.get() = builder.to_uri(); + } + + m_uri = storage_uri(primary_uri, secondary_uri); + } + + /// + /// Gets the version id of the blob, if this blob refers to a version.
+ /// + /// The blob's version id, if the blob refers to a version; otherwise returns an empty string. + const utility::string_t& version_id() const + { + return m_version_id; + } + /// /// Gets the state of the most recent or pending copy operation. /// @@ -5059,13 +6173,35 @@ namespace azure { namespace storage { /// the state of the most recent or pending copy operation. WASTORAGE_API cloud_blob(utility::string_t name, utility::string_t snapshot_time, cloud_blob_container container, cloud_blob_properties properties, cloud_metadata metadata, azure::storage::copy_state copy_state); + /// + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. + /// + /// The URI of a source blob. + /// An enum that represents the for the destination blob. + /// Metadata that will be set on the destination blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. 
+ /// + WASTORAGE_API pplx::task start_copy_async_impl(const web::http::uri& source, const premium_blob_tier tier, const cloud_metadata& metadata, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); + void assert_no_snapshot() const; + WASTORAGE_API pplx::task download_attributes_async_impl(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_timer = false, std::shared_ptr timer_handler = nullptr); + void set_type(blob_type value) { m_properties->set_type(value); } + utility::string_t get_premium_access_tier_string(const premium_blob_tier tier); + std::shared_ptr m_properties; std::shared_ptr m_metadata; std::shared_ptr m_copy_state; @@ -5073,17 +6209,21 @@ namespace azure { namespace storage { private: void init(utility::string_t snapshot_time, storage_credentials credentials); - WASTORAGE_API pplx::task exists_async(bool primary_only, const blob_request_options& options, operation_context context); - WASTORAGE_API pplx::task download_single_range_to_stream_async(concurrency::streams::ostream target, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, bool update_properties = false); + WASTORAGE_API pplx::task exists_async_impl(bool primary_only, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); + WASTORAGE_API pplx::task download_single_range_to_stream_async(concurrency::streams::ostream target, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, bool update_properties, const pplx::cancellation_token& cancellation_token, std::shared_ptr 
timer_handler = nullptr); + WASTORAGE_API pplx::task upload_properties_async_impl(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_timeout, std::shared_ptr timer_handler = nullptr); utility::string_t m_name; utility::string_t m_snapshot_time; + utility::string_t m_version_id; cloud_blob_container m_container; storage_uri m_uri; friend class cloud_blob_container; friend class cloud_blob_directory; friend class list_blob_item; + friend class core::basic_cloud_page_blob_ostreambuf; + friend class core::basic_cloud_append_blob_ostreambuf; }; /// @@ -5197,7 +6337,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to open a stream for writing to the block blob. If the blob already exists on the service, it will be overwritten. + /// Initiates an asynchronous operation to open a stream for writing to the block blob. If the blob already exists on the service, it will be overwritten. /// /// A object of type that represents the current operation. pplx::task open_write_async() @@ -5206,7 +6346,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to open a stream for writing to the block blob. If the blob already exists on the service, it will be overwritten. + /// Initiates an asynchronous operation to open a stream for writing to the block blob. If the blob already exists on the service, it will be overwritten. /// /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. @@ -5214,7 +6354,25 @@ namespace azure { namespace storage { /// A object of type that represents the current operation. 
/// To avoid overwriting and instead throw an error if the blob exists, please pass in an /// parameter generated using - WASTORAGE_API pplx::task open_write_async(const access_condition& condition, const blob_request_options& options, operation_context context); + pplx::task open_write_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return open_write_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to open a stream for writing to the block blob. If the blob already exists on the service, it will be overwritten. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + /// To avoid overwriting and instead throw an error if the blob exists, please pass in an + /// parameter generated using + pplx::task open_write_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return open_write_async_impl(condition, options, context, cancellation_token, true); + } /// /// Returns an enumerable collection of the blob's blocks, using the specified block list filter. @@ -5240,7 +6398,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to return an enumerable collection of the blob's blocks, + /// Initiates an asynchronous operation to return an enumerable collection of the blob's blocks, /// using the specified block list filter. /// /// A object of type , of type , that represents the current operation. 
@@ -5250,7 +6408,22 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to return an enumerable collection of the blob's blocks, + /// Initiates an asynchronous operation to return an enumerable collection of the blob's blocks, + /// using the specified block list filter. + /// + /// One of the enumeration values that indicates whether to return + /// committed blocks, uncommitted blocks, or both. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type , of type , that represents the current operation. + pplx::task> download_block_list_async(block_listing_filter listing_filter, const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return download_block_list_async(listing_filter, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to return an enumerable collection of the blob's blocks, /// using the specified block list filter. /// /// One of the enumeration values that indicates whether to return @@ -5258,8 +6431,9 @@ namespace azure { namespace storage { /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type , of type , that represents the current operation. 
- WASTORAGE_API pplx::task> download_block_list_async(block_listing_filter listing_filter, const access_condition& condition, const blob_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task> download_block_list_async(block_listing_filter listing_filter, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; /// /// Downloads the blob's contents as a string. @@ -5283,7 +6457,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to download the blob's contents as a string. + /// Initiates an asynchronous operation to download the blob's contents as a string. /// /// A object of type that represents the current operation. pplx::task download_text_async() @@ -5292,24 +6466,73 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to download the blob's contents as a string. + /// Initiates an asynchronous operation to download the blob's contents as a string. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task download_text_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return download_text_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to download the blob's contents as a string. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. 
+ /// A object of type that represents the current operation. + WASTORAGE_API pplx::task download_text_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); + + /// + /// Sets standard account's blob tier. + /// + /// An enum that represents the blob tier to be set. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + void set_standard_blob_tier(const standard_blob_tier tier, const access_condition & condition, const blob_request_options & options, operation_context context) + { + set_standard_blob_tier_async(tier, condition, options, context).wait(); + } + + /// + /// Initiates an asynchronous operation to set standard account's blob tier. + /// + /// An enum that represents the blob tier to be set. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task set_standard_blob_tier_async(const standard_blob_tier tier, const access_condition & condition, const blob_request_options & options, operation_context context) + { + return set_standard_blob_tier_async(tier, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to set standard account's blob tier. /// + /// An enum that represents the blob tier to be set. /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. 
+ /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. - WASTORAGE_API pplx::task download_text_async(const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task set_standard_blob_tier_async(const standard_blob_tier tier, const access_condition & condition, const blob_request_options & options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Uploads a single block. /// /// A Base64-encoded block ID that identifies the block. /// A stream that provides the data for the block. - /// An optional hash value that will be used to set the Content-MD5 property - /// on the blob. May be an empty string. - void upload_block(const utility::string_t& block_id, concurrency::streams::istream block_data, const utility::string_t& content_md5) const + /// A hash value used to ensure transactional integrity. May be or a base64-encoded MD5 string or CRC64 integer. + void upload_block(const utility::string_t& block_id, concurrency::streams::istream block_data, const checksum& content_checksum) const { - upload_block_async(block_id, block_data, content_md5).wait(); + upload_block_async(block_id, block_data, content_checksum).wait(); } /// @@ -5317,41 +6540,57 @@ namespace azure { namespace storage { /// /// A Base64-encoded block ID that identifies the block. /// A stream that provides the data for the block. - /// An optional hash value that will be used to set the Content-MD5 property - /// on the blob. May be an empty string. + /// A hash value used to ensure transactional integrity. May be or a base64-encoded MD5 string or CRC64 integer. /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. 
-        void upload_block(const utility::string_t& block_id, concurrency::streams::istream block_data, const utility::string_t& content_md5, const access_condition& condition, const blob_request_options& options, operation_context context) const
+        void upload_block(const utility::string_t& block_id, concurrency::streams::istream block_data, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context) const
+        {
+            upload_block_async(block_id, block_data, content_checksum, condition, options, context).wait();
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to upload a single block.
+        /// </summary>
+        /// <param name="block_id">A Base64-encoded block ID that identifies the block.</param>
+        /// <param name="block_data">A stream that provides the data for the block.</param>
+        /// <param name="content_checksum">A hash value used to ensure transactional integrity. May be <see cref="azure::storage::checksum_none" /> or a base64-encoded MD5 string or CRC64 integer.</param>
+        /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
+        pplx::task<void> upload_block_async(const utility::string_t& block_id, concurrency::streams::istream block_data, const checksum& content_checksum) const
         {
-            upload_block_async(block_id, block_data, content_md5, condition, options, context).wait();
+            return upload_block_async(block_id, block_data, content_checksum, access_condition(), blob_request_options(), operation_context());
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to upload a single block.
+        /// Initiates an asynchronous operation to upload a single block.
         /// </summary>
         /// <param name="block_id">A Base64-encoded block ID that identifies the block.</param>
         /// <param name="block_data">A stream that provides the data for the block.</param>
-        /// <param name="content_md5">An optional hash value that will be used to set the Content-MD5 property
-        /// on the blob. May be an empty string.</param>
+        /// <param name="content_checksum">A hash value used to ensure transactional integrity. May be <see cref="azure::storage::checksum_none" /> or a base64-encoded MD5 string or CRC64 integer.</param>
+        /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
         /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
-        pplx::task<void> upload_block_async(const utility::string_t& block_id, concurrency::streams::istream block_data, const utility::string_t& content_md5) const
+        WASTORAGE_API pplx::task<void> upload_block_async(const utility::string_t& block_id, concurrency::streams::istream block_data, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context) const
         {
-            return upload_block_async(block_id, block_data, content_md5, access_condition(), blob_request_options(), operation_context());
+            return upload_block_async(block_id, block_data, content_checksum, condition, options, context, pplx::cancellation_token::none());
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to upload a single block.
+        /// Initiates an asynchronous operation to upload a single block.
         /// </summary>
         /// <param name="block_id">A Base64-encoded block ID that identifies the block.</param>
         /// <param name="block_data">A stream that provides the data for the block.</param>
-        /// <param name="content_md5">An optional hash value that will be used to set the Content-MD5 property
-        /// on the blob. May be an empty string.</param>
+        /// <param name="content_checksum">A hash value used to ensure transactional integrity. May be <see cref="azure::storage::checksum_none" /> or a base64-encoded MD5 string or CRC64 integer.</param>
         /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
         /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <param name="cancellation_token">An <see cref="pplx::cancellation_token" /> object that is used to cancel the current operation.</param>
         /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
-        WASTORAGE_API pplx::task<void> upload_block_async(const utility::string_t& block_id, concurrency::streams::istream block_data, const utility::string_t& content_md5, const access_condition& condition, const blob_request_options& options, operation_context context) const;
+        pplx::task<void> upload_block_async(const utility::string_t& block_id, concurrency::streams::istream block_data, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
+        {
+            return upload_block_async_impl(block_id, block_data, content_checksum, condition, options, context, cancellation_token, true);
+        }
 
         /// <summary>
         /// Uploads a list of blocks to a new or existing blob.
@@ -5375,7 +6614,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to upload a list of blocks to a new or existing blob.
+        /// Initiates an asynchronous operation to upload a list of blocks to a new or existing blob.
         /// </summary>
         /// <param name="block_list">An enumerable collection of block IDs, as Base64-encoded strings.</param>
         /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
@@ -5385,14 +6624,31 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to upload a list of blocks to a new or existing blob.
+        /// Initiates an asynchronous operation to upload a list of blocks to a new or existing blob.
+        /// </summary>
+        /// <param name="block_list">An enumerable collection of block IDs, as Base64-encoded strings.</param>
+        /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
+        pplx::task<void> upload_block_list_async(const std::vector<block_list_item>& block_list, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return upload_block_list_async(block_list, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to upload a list of blocks to a new or existing blob.
         /// </summary>
         /// <param name="block_list">An enumerable collection of block IDs, as Base64-encoded strings.</param>
         /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
         /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <param name="cancellation_token">An <see cref="pplx::cancellation_token" /> object that is used to cancel the current operation.</param>
         /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
-        WASTORAGE_API pplx::task<void> upload_block_list_async(const std::vector<block_list_item>& block_list, const access_condition& condition, const blob_request_options& options, operation_context context);
+        pplx::task<void> upload_block_list_async(const std::vector<block_list_item>& block_list, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
+        {
+            return upload_block_list_async_impl(block_list, condition, options, context, cancellation_token, true);
+        }
 
         /// <summary>
         /// Uploads a stream to a block blob. If the blob already exists on the service, it will be overwritten.
@@ -5439,7 +6695,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to upload a stream to a block blob. If the blob already exists on the service, it will be overwritten.
+        /// Initiates an asynchronous operation to upload a stream to a block blob. If the blob already exists on the service, it will be overwritten.
         /// </summary>
         /// <param name="source">The stream providing the blob content.</param>
         /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
@@ -5449,7 +6705,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to upload a stream to a block blob. If the blob already exists on the service, it will be overwritten.
+        /// Initiates an asynchronous operation to upload a stream to a block blob. If the blob already exists on the service, it will be overwritten.
         /// </summary>
         /// <param name="source">The stream providing the blob content.</param>
         /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
@@ -5462,7 +6718,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to upload a stream to a block blob. If the blob already exists on the service, it will be overwritten.
+        /// Initiates an asynchronous operation to upload a stream to a block blob. If the blob already exists on the service, it will be overwritten.
         /// </summary>
         /// <param name="source">The stream providing the blob content.</param>
         /// <param name="length">The number of bytes to write from the source stream at its current position.</param>
@@ -5473,7 +6729,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to upload a stream to a block blob. If the blob already exists on the service, it will be overwritten.
+        /// Initiates an asynchronous operation to upload a stream to a block blob. If the blob already exists on the service, it will be overwritten.
        /// </summary>
         /// <param name="source">The stream providing the blob content.</param>
         /// <param name="length">The number of bytes to write from the source stream at its current position.</param>
@@ -5481,8 +6737,23 @@ namespace azure { namespace storage {
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
         /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
         /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
-        WASTORAGE_API pplx::task<void> upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context);
+        pplx::task<void> upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return upload_from_stream_async(source, length, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to upload a stream to a block blob. If the blob already exists on the service, it will be overwritten.
+        /// </summary>
+        /// <param name="source">The stream providing the blob content.</param>
+        /// <param name="length">The number of bytes to write from the source stream at its current position.</param>
+        /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <param name="cancellation_token">An <see cref="pplx::cancellation_token" /> object that is used to cancel the current operation.</param>
+        /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
+        WASTORAGE_API pplx::task<void> upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token);
+
         /// <summary>
         /// Uploads a file to a block blob. If the blob already exists on the service, it will be overwritten.
         /// </summary>
@@ -5505,7 +6776,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to upload a file to a block blob. If the blob already exists on the service, it will be overwritten.
+        /// Initiates an asynchronous operation to upload a file to a block blob. If the blob already exists on the service, it will be overwritten.
         /// </summary>
         /// <param name="path">The file providing the blob content.</param>
         /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
@@ -5515,14 +6786,28 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to upload a file to a block blob. If the blob already exists on the service, it will be overwritten.
+        /// Initiates an asynchronous operation to upload a file to a block blob. If the blob already exists on the service, it will be overwritten.
+        /// </summary>
+        /// <param name="path">The file providing the blob content.</param>
+        /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
+        pplx::task<void> upload_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return upload_from_file_async(path, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to upload a file to a block blob. If the blob already exists on the service, it will be overwritten.
         /// </summary>
         /// <param name="path">The file providing the blob content.</param>
         /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
         /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <param name="cancellation_token">An <see cref="pplx::cancellation_token" /> object that is used to cancel the current operation.</param>
         /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
-        WASTORAGE_API pplx::task<void> upload_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context);
+        WASTORAGE_API pplx::task<void> upload_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token);
 
         /// <summary>
         /// Uploads a string of text to a blob. If the blob already exists on the service, it will be overwritten.
@@ -5563,14 +6848,28 @@ namespace azure { namespace storage {
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
         /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
         /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
-        WASTORAGE_API pplx::task<void> upload_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context);
-
-    private:
+        pplx::task<void> upload_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return upload_text_async(content, condition, options, context, pplx::cancellation_token::none());
+        }
 
         /// <summary>
-        /// Initializes a new instance of the <see cref="azure::storage::cloud_block_blob" /> class.
+        /// Uploads a string of text to a blob. If the blob already exists on the service, it will be overwritten.
         /// </summary>
-        /// <param name="name">The name of the blob.</param>
+        /// <param name="content">A string containing the text to upload.</param>
+        /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <param name="cancellation_token">An <see cref="pplx::cancellation_token" /> object that is used to cancel the current operation.</param>
+        /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
+        WASTORAGE_API pplx::task<void> upload_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token);
+
+    private:
+
+        /// <summary>
+        /// Initializes a new instance of the <see cref="azure::storage::cloud_block_blob" /> class.
+        /// </summary>
+        /// <param name="name">The name of the blob.</param>
         /// <param name="snapshot_time">The snapshot timestamp, if the blob is a snapshot.</param>
         /// <param name="container">An <see cref="azure::storage::cloud_blob_container" /> object.</param>
         cloud_block_blob(utility::string_t name, utility::string_t snapshot_time, cloud_blob_container container)
@@ -5579,9 +6878,14 @@ namespace azure { namespace storage {
             set_type(blob_type::block_blob);
         }
 
+        WASTORAGE_API pplx::task<concurrency::streams::ostream> open_write_async_impl(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout = false, std::shared_ptr<core::timer_handler> timer_handler = nullptr);
+        WASTORAGE_API pplx::task<void> upload_block_async_impl(const utility::string_t& block_id, concurrency::streams::istream block_data, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_timeout, std::shared_ptr<core::timer_handler> timer_handler = nullptr) const;
+        WASTORAGE_API pplx::task<void> upload_block_list_async_impl(const std::vector<block_list_item>& block_list, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_timeout, std::shared_ptr<core::timer_handler> timer_handler = nullptr);
+
         friend class cloud_blob_container;
         friend class cloud_blob_directory;
-    };
+        friend class core::basic_cloud_block_blob_ostreambuf;
+    };
 
     /// <summary>
     /// Represents a Windows Azure page blob.
@@ -5716,7 +7020,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to open a stream for writing to an existing page blob.
+        /// Initiates an asynchronous operation to open a stream for writing to an existing page blob.
         /// </summary>
         /// <returns>A <see cref="pplx::task" /> object of type <see cref="concurrency::streams::ostream" /> that represents the current operation.</returns>
         pplx::task<concurrency::streams::ostream> open_write_async()
@@ -5725,16 +7029,29 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to open a stream for writing to an existing page blob.
+        /// Initiates an asynchronous operation to open a stream for writing to an existing page blob.
         /// </summary>
         /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
         /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
         /// <returns>A <see cref="pplx::task" /> object of type <see cref="concurrency::streams::ostream" /> that represents the current operation.</returns>
-        WASTORAGE_API pplx::task<concurrency::streams::ostream> open_write_async(const access_condition& condition, const blob_request_options& options, operation_context context);
+        pplx::task<concurrency::streams::ostream> open_write_async(const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return open_write_async(condition, options, context, pplx::cancellation_token::none());
+        }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to open a stream for writing to a new page blob.
+        /// Initiates an asynchronous operation to open a stream for writing to an existing page blob.
+        /// </summary>
+        /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <param name="cancellation_token">An <see cref="pplx::cancellation_token" /> object that is used to cancel the current operation.</param>
+        /// <returns>A <see cref="pplx::task" /> object of type <see cref="concurrency::streams::ostream" /> that represents the current operation.</returns>
+        WASTORAGE_API pplx::task<concurrency::streams::ostream> open_write_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token);
+
+        /// <summary>
+        /// Initiates an asynchronous operation to open a stream for writing to a new page blob.
         /// </summary>
         /// <param name="size">The size of the write operation, in bytes. The size must be a multiple of 512.</param>
         /// <returns>A <see cref="pplx::task" /> object of type <see cref="concurrency::streams::ostream" /> that represents the current operation.</returns>
@@ -5744,15 +7061,34 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to open a stream for writing to a new page blob.
+        /// Initiates an asynchronous operation to open a stream for writing to a new page blob.
+        /// </summary>
+        /// <param name="size">The size of the write operation, in bytes. The size must be a multiple of 512.</param>
+        /// <param name="sequence_number">A user-controlled number to track request sequence, whose value must be between 0 and 2^63 - 1.</param>
+        /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <returns>A <see cref="pplx::task" /> object of type <see cref="concurrency::streams::ostream" /> that represents the current operation.</returns>
+        pplx::task<concurrency::streams::ostream> open_write_async(utility::size64_t size, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return open_write_async(size, sequence_number, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to open a stream for writing to a new page blob.
         /// </summary>
         /// <param name="size">The size of the write operation, in bytes. The size must be a multiple of 512.</param>
         /// <param name="sequence_number">A user-controlled number to track request sequence, whose value must be between 0 and 2^63 - 1.</param>
         /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
         /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <param name="cancellation_token">An <see cref="pplx::cancellation_token" /> object that is used to cancel the current operation.</param>
         /// <returns>A <see cref="pplx::task" /> object of type <see cref="concurrency::streams::ostream" /> that represents the current operation.</returns>
-        WASTORAGE_API pplx::task<concurrency::streams::ostream> open_write_async(utility::size64_t size, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context);
+        pplx::task<concurrency::streams::ostream> open_write_async(utility::size64_t size, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
+        {
+            return open_write_async_impl(size, sequence_number, condition, options, context, cancellation_token, true);
+        }
 
         /// <summary>
         /// Clears pages from a page blob.
@@ -5778,7 +7114,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to clear pages from a page blob.
+        /// Initiates an asynchronous operation to clear pages from a page blob.
         /// </summary>
         /// <param name="start_offset">The offset at which to begin clearing pages, in bytes. The offset must be a multiple of 512.</param>
         /// <param name="length">The length of the data range to be cleared, in bytes. The length must be a multiple of 512.</param>
@@ -5789,7 +7125,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to clear pages from a page blob.
+        /// Initiates an asynchronous operation to clear pages from a page blob.
         /// </summary>
         /// <param name="start_offset">The offset at which to begin clearing pages, in bytes. The offset must be a multiple of 512.</param>
         /// <param name="length">The length of the data range to be cleared, in bytes. The length must be a multiple of 512.</param>
@@ -5797,7 +7133,22 @@ namespace azure { namespace storage {
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
         /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
         /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
-        WASTORAGE_API pplx::task<void> clear_pages_async(int64_t start_offset, int64_t length, const access_condition& condition, const blob_request_options& options, operation_context context);
+        pplx::task<void> clear_pages_async(int64_t start_offset, int64_t length, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return clear_pages_async(start_offset, length, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to clear pages from a page blob.
+        /// </summary>
+        /// <param name="start_offset">The offset at which to begin clearing pages, in bytes. The offset must be a multiple of 512.</param>
+        /// <param name="length">The length of the data range to be cleared, in bytes. The length must be a multiple of 512.</param>
+        /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <param name="cancellation_token">An <see cref="pplx::cancellation_token" /> object that is used to cancel the current operation.</param>
+        /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
+        WASTORAGE_API pplx::task<void> clear_pages_async(int64_t start_offset, int64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token);
 
         /// <summary>
         /// Gets a collection of valid page ranges and their starting and ending bytes.
@@ -5846,7 +7197,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes.
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes.
         /// </summary>
         /// <returns>A <see cref="pplx::task" /> object of type <see cref="std::vector" />, of type <see cref="azure::storage::page_range" />, that represents the current operation.</returns>
         pplx::task<std::vector<page_range>> download_page_ranges_async() const
@@ -5855,7 +7206,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes.
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes.
         /// </summary>
         /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
@@ -5867,7 +7218,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes.
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes.
         /// </summary>
         /// <param name="offset">The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512.</param>
         /// <param name="length">The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512.</param>
@@ -5878,7 +7229,7 @@ namespace azure { namespace storage {
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes.
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes.
         /// </summary>
         /// <param name="offset">The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512.</param>
         /// <param name="length">The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512.</param>
@@ -5886,14 +7237,29 @@ namespace azure { namespace storage {
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
         /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
         /// <returns>A <see cref="pplx::task" /> object of type <see cref="std::vector" />, of type <see cref="azure::storage::page_range" />, that represents the current operation.</returns>
-        WASTORAGE_API pplx::task<std::vector<page_range>> download_page_ranges_async(utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) const;
+        pplx::task<std::vector<page_range>> download_page_ranges_async(utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) const
+        {
+            return download_page_ranges_async(offset, length, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes.
+        /// </summary>
+        /// <param name="offset">The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512.</param>
+        /// <param name="length">The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512.</param>
+        /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <param name="cancellation_token">An <see cref="pplx::cancellation_token" /> object that is used to cancel the current operation.</param>
+        /// <returns>A <see cref="pplx::task" /> object of type <see cref="std::vector" />, of type <see cref="azure::storage::page_range" />, that represents the current operation.</returns>
+        WASTORAGE_API pplx::task<std::vector<page_range>> download_page_ranges_async(utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const;
 
         /// <summary>
         /// Gets a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
         /// </summary>
-        /// <param name="previous_snapshot_time">An snapshot time that represents previous snapshot.</param>
+        /// <param name="previous_snapshot_time">A snapshot time string that represents the previous snapshot.</param>
         /// <returns>An enumerable collection of page diff ranges.</returns>
-        std::vector<page_diff_range> download_page_ranges_diff(utility::string_t previous_snapshot_time) const
+        std::vector<page_diff_range> download_page_ranges_diff(const utility::string_t& previous_snapshot_time) const
         {
             return download_page_ranges_diff_async(previous_snapshot_time).get();
         }
@@ -5901,12 +7267,12 @@ namespace azure { namespace storage {
 
         /// <summary>
         /// Gets a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
         /// </summary>
-        /// <param name="previous_snapshot_time">An snapshot time that represents previous snapshot.</param>
+        /// <param name="previous_snapshot_time">A snapshot time string that represents the previous snapshot.</param>
         /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
         /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
         /// <returns>An enumerable collection of page diff ranges.</returns>
-        std::vector<page_diff_range> download_page_ranges_diff(utility::string_t previous_snapshot_time, const access_condition& condition, const blob_request_options& options, operation_context context) const
+        std::vector<page_diff_range> download_page_ranges_diff(const utility::string_t& previous_snapshot_time, const access_condition& condition, const blob_request_options& options, operation_context context) const
         {
             return download_page_ranges_diff_async(previous_snapshot_time, condition, options, context).get();
         }
@@ -5914,11 +7280,11 @@ namespace azure { namespace storage {
 
         /// <summary>
         /// Gets a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
         /// </summary>
-        /// <param name="previous_snapshot_time">An snapshot time that represents previous snapshot.</param>
+        /// <param name="previous_snapshot_time">A snapshot time string that represents the previous snapshot.</param>
         /// <param name="offset">The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512.</param>
         /// <param name="length">The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512.</param>
         /// <returns>An enumerable collection of page diff ranges.</returns>
-        std::vector<page_diff_range> download_page_ranges_diff(utility::string_t previous_snapshot_time, utility::size64_t offset, utility::size64_t length) const
+        std::vector<page_diff_range> download_page_ranges_diff(const utility::string_t& previous_snapshot_time, utility::size64_t offset, utility::size64_t length) const
         {
             return download_page_ranges_diff_async(previous_snapshot_time, offset, length).get();
         }
@@ -5926,76 +7292,245 @@ namespace azure { namespace storage {
 
         /// <summary>
         /// Gets a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
         /// </summary>
-        /// <param name="previous_snapshot_time">An snapshot time that represents previous snapshot.</param>
+        /// <param name="previous_snapshot_time">A snapshot time string that represents the previous snapshot.</param>
         /// <param name="offset">The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512.</param>
         /// <param name="length">The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512.</param>
         /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
         /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
         /// <returns>An enumerable collection of page diff ranges.</returns>
-        std::vector<page_diff_range> download_page_ranges_diff(utility::string_t previous_snapshot_time, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) const
+        std::vector<page_diff_range> download_page_ranges_diff(const utility::string_t& previous_snapshot_time, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) const
         {
             return download_page_ranges_diff_async(previous_snapshot_time, offset, length, condition, options, context).get();
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
         /// </summary>
-        /// <param name="previous_snapshot_time">An snapshot time that represents previous snapshot.</param>
+        /// <param name="previous_snapshot_time">A snapshot time string that represents the previous snapshot.</param>
         /// <returns>A <see cref="pplx::task" /> object of type <see cref="std::vector" />, of type <see cref="azure::storage::page_diff_range" />, that represents the current operation.</returns>
-        pplx::task<std::vector<page_diff_range>> download_page_ranges_diff_async(utility::string_t previous_snapshot_time) const
+        pplx::task<std::vector<page_diff_range>> download_page_ranges_diff_async(const utility::string_t& previous_snapshot_time) const
         {
             return download_page_ranges_diff_async(previous_snapshot_time, access_condition(), blob_request_options(), operation_context());
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
         /// </summary>
-        /// <param name="previous_snapshot_time">An snapshot time that represents previous snapshot.</param>
+        /// <param name="previous_snapshot_time">A snapshot time string that represents the previous snapshot.</param>
         /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
         /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
         /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
         /// <returns>A <see cref="pplx::task" /> object of type <see cref="std::vector" />, of type <see cref="azure::storage::page_diff_range" />, that represents the current operation.</returns>
-        pplx::task<std::vector<page_diff_range>> download_page_ranges_diff_async(utility::string_t previous_snapshot_time, const access_condition& condition, const blob_request_options& options, operation_context context) const
+        pplx::task<std::vector<page_diff_range>> download_page_ranges_diff_async(const utility::string_t& previous_snapshot_time, const access_condition& condition, const blob_request_options& options, operation_context context) const
         {
             return download_page_ranges_diff_async(previous_snapshot_time, std::numeric_limits<utility::size64_t>::max(), 0, condition, options, context);
         }
 
         /// <summary>
-        /// Intitiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
         /// </summary>
-        /// <param name="previous_snapshot_time">An snapshot time that represents previous snapshot.</param>
+        /// <param name="previous_snapshot_time">A snapshot time string that represents the previous snapshot.</param>
         /// <param name="offset">The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512.</param>
         /// <param name="length">The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512.</param>
         /// <returns>A <see cref="pplx::task" /> object of type <see cref="std::vector" />, of type <see cref="azure::storage::page_diff_range" />, that represents the current operation.</returns>
- pplx::task> download_page_ranges_diff_async(utility::string_t previous_snapshot_time, utility::size64_t offset, utility::size64_t length) const + pplx::task> download_page_ranges_diff_async(const utility::string_t& previous_snapshot_time, utility::size64_t offset, utility::size64_t length) const { return download_page_ranges_diff_async(previous_snapshot_time, offset, length, access_condition(), blob_request_options(), operation_context()); } /// - /// Intitiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot. + /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot. + /// + /// A snapshot time string that represents previous snapshot. + /// The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512. + /// The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type , of type , that represents the current operation. 
+ pplx::task> download_page_ranges_diff_async(const utility::string_t& previous_snapshot_time, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return download_page_ranges_diff_async(previous_snapshot_time, offset, length, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot. + /// + /// A snapshot time string that represents previous snapshot. + /// The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512. + /// The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type , of type , that represents the current operation. + pplx::task> download_page_ranges_diff_async(const utility::string_t& previous_snapshot_time, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const + { + return download_page_ranges_diff_async_impl(previous_snapshot_time, utility::string_t(), offset, length, condition, options, context, cancellation_token); + } + + /// + /// Gets a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot. + /// + /// A snapshot URL string that represents the previous snapshot. that represents previous snapshot. 
+        /// An enumerable collection of page diff ranges.
+        ///
+        /// This API can only be called against the incremental snapshots of Managed Disks that belong to the same snapshot family. Please browse the following URI for more information:
+        /// https://aka.ms/mdincrementalsnapshots
+        ///
+        std::vector<page_diff_range> download_page_ranges_diff_md(const utility::string_t& previous_snapshot_url) const
+        {
+            return download_page_ranges_diff_md_async(previous_snapshot_url).get();
+        }
+
+        ///
+        /// Gets a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
+        ///
+        /// A snapshot URL string that represents the previous snapshot.
+        /// An access_condition object that represents the access condition for the operation.
+        /// A blob_request_options object that specifies additional options for the request.
+        /// An operation_context object that represents the context for the current operation.
+        /// An enumerable collection of page diff ranges.
+        ///
+        /// This API can only be called against the incremental snapshots of Managed Disks that belong to the same snapshot family. Please browse the following URI for more information:
+        /// https://aka.ms/mdincrementalsnapshots
+        ///
+        std::vector<page_diff_range> download_page_ranges_diff_md(const utility::string_t& previous_snapshot_url, const access_condition& condition, const blob_request_options& options, operation_context context) const
+        {
+            return download_page_ranges_diff_md_async(previous_snapshot_url, condition, options, context).get();
+        }
+
+        ///
+        /// Gets a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
+        ///
+        /// A snapshot URL string that represents the previous snapshot.
+        /// The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512.
+        /// The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512.
+        /// An enumerable collection of page diff ranges.
+        ///
+        /// This API can only be called against the incremental snapshots of Managed Disks that belong to the same snapshot family. Please browse the following URI for more information:
+        /// https://aka.ms/mdincrementalsnapshots
+        ///
+        std::vector<page_diff_range> download_page_ranges_diff_md(const utility::string_t& previous_snapshot_url, utility::size64_t offset, utility::size64_t length) const
+        {
+            return download_page_ranges_diff_md_async(previous_snapshot_url, offset, length).get();
+        }
+
+        ///
+        /// Gets a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
        ///
-        /// An snapshot time that represents previous snapshot.
+        /// A snapshot URL string that represents the previous snapshot.
        /// The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512.
        /// The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512.
        /// An access_condition object that represents the access condition for the operation.
        /// A blob_request_options object that specifies additional options for the request.
        /// An operation_context object that represents the context for the current operation.
+        /// An enumerable collection of page diff ranges.
+        ///
+        /// This API can only be called against the incremental snapshots of Managed Disks that belong to the same snapshot family. Please browse the following URI for more information:
+        /// https://aka.ms/mdincrementalsnapshots
+        ///
+        std::vector<page_diff_range> download_page_ranges_diff_md(const utility::string_t& previous_snapshot_url, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) const
+        {
+            return download_page_ranges_diff_md_async(previous_snapshot_url, offset, length, condition, options, context).get();
+        }
+
+        ///
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
+        ///
+        /// A snapshot URL string that represents the previous snapshot.
+        /// A pplx::task object of type std::vector<page_diff_range> that represents the current operation.
+        ///
+        /// This API can only be called against the incremental snapshots of Managed Disks that belong to the same snapshot family. Please browse the following URI for more information:
+        /// https://aka.ms/mdincrementalsnapshots
+        ///
+        pplx::task<std::vector<page_diff_range>> download_page_ranges_diff_md_async(const utility::string_t& previous_snapshot_url) const
+        {
+            return download_page_ranges_diff_md_async(previous_snapshot_url, access_condition(), blob_request_options(), operation_context());
+        }
+
+        ///
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
+        ///
+        /// A snapshot URL string that represents the previous snapshot.
+        /// An access_condition object that represents the access condition for the operation.
+        /// A blob_request_options object that specifies additional options for the request.
+        /// An operation_context object that represents the context for the current operation.
+        /// A pplx::task object of type std::vector<page_diff_range> that represents the current operation.
+        ///
+        /// This API can only be called against the incremental snapshots of Managed Disks that belong to the same snapshot family. Please browse the following URI for more information:
+        /// https://aka.ms/mdincrementalsnapshots
+        ///
+        pplx::task<std::vector<page_diff_range>> download_page_ranges_diff_md_async(const utility::string_t& previous_snapshot_url, const access_condition& condition, const blob_request_options& options, operation_context context) const
+        {
+            return download_page_ranges_diff_md_async(previous_snapshot_url, std::numeric_limits<utility::size64_t>::max(), 0, condition, options, context);
+        }
+
+        ///
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
+        ///
+        /// A snapshot URL string that represents the previous snapshot.
+        /// The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512.
+        /// The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512.
        /// A pplx::task object of type std::vector<page_diff_range> that represents the current operation.
-        WASTORAGE_API pplx::task<std::vector<page_diff_range>> download_page_ranges_diff_async(utility::string_t previous_snapshot_time, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) const;
+        ///
+        /// This API can only be called against the incremental snapshots of Managed Disks that belong to the same snapshot family.
+        /// Please browse the following URI for more information:
+        /// https://aka.ms/mdincrementalsnapshots
+        ///
+        pplx::task<std::vector<page_diff_range>> download_page_ranges_diff_md_async(const utility::string_t& previous_snapshot_url, utility::size64_t offset, utility::size64_t length) const
+        {
+            return download_page_ranges_diff_md_async(previous_snapshot_url, offset, length, access_condition(), blob_request_options(), operation_context());
+        }
+        ///
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
+        ///
+        /// A snapshot URL string that represents the previous snapshot.
+        /// The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512.
+        /// The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512.
+        /// An access_condition object that represents the access condition for the operation.
+        /// A blob_request_options object that specifies additional options for the request.
+        /// An operation_context object that represents the context for the current operation.
+        /// A pplx::task object of type std::vector<page_diff_range> that represents the current operation.
+        ///
+        /// This API can only be called against the incremental snapshots of Managed Disks that belong to the same snapshot family. Please browse the following URI for more information:
+        /// https://aka.ms/mdincrementalsnapshots
+        ///
+        pplx::task<std::vector<page_diff_range>> download_page_ranges_diff_md_async(const utility::string_t& previous_snapshot_url, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) const
+        {
+            return download_page_ranges_diff_md_async(previous_snapshot_url, offset, length, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        ///
+        /// Initiates an asynchronous operation to get a collection of valid page ranges and their starting and ending bytes, only pages that were changed between target blob and previous snapshot.
+        ///
+        /// A snapshot URL string that represents the previous snapshot.
+        /// The starting offset of the data range over which to list page ranges, in bytes. Must be a multiple of 512.
+        /// The length of the data range over which to list page ranges, in bytes. Must be a multiple of 512.
+        /// An access_condition object that represents the access condition for the operation.
+        /// A blob_request_options object that specifies additional options for the request.
+        /// An operation_context object that represents the context for the current operation.
+        /// A pplx::cancellation_token object that is used to cancel the current operation.
+        /// A pplx::task object of type std::vector<page_diff_range> that represents the current operation.
+        ///
+        /// This API can only be called against the incremental snapshots of Managed Disks that belong to the same snapshot family. Please browse the following URI for more information:
+        /// https://aka.ms/mdincrementalsnapshots
+        ///
+        pplx::task<std::vector<page_diff_range>> download_page_ranges_diff_md_async(const utility::string_t& previous_snapshot_url, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
+        {
+            return download_page_ranges_diff_async_impl(utility::string_t(), previous_snapshot_url, offset, length, condition, options, context, cancellation_token);
+        }

        ///
        /// Writes pages to a page blob.
        ///
        /// A stream providing the page data.
        /// The offset at which to begin writing, in bytes. The offset must be a multiple of 512.
-        /// An optional hash value that will be used to set the Content-MD5 property
-        /// on the blob. May be an empty string.
-        void upload_pages(concurrency::streams::istream page_data, int64_t start_offset, const utility::string_t& content_md5)
+        /// A hash value used to ensure transactional integrity. May be empty, or a base64-encoded MD5 string or CRC64 integer.
+        void upload_pages(concurrency::streams::istream page_data, int64_t start_offset, const checksum& content_checksum)
        {
-            upload_pages_async(page_data, start_offset, content_md5).wait();
+            upload_pages_async(page_data, start_offset, content_checksum).wait();
        }
        ///
@@ -6003,41 +7538,57 @@ namespace azure { namespace storage {
        ///
        /// A stream providing the page data.
        /// The offset at which to begin writing, in bytes. The offset must be a multiple of 512.
-        /// An optional hash value that will be used to set the Content-MD5 property
-        /// on the blob. May be an empty string.
+        /// A hash value used to ensure transactional integrity. May be empty, or a base64-encoded MD5 string or CRC64 integer.
        /// An access_condition object that represents the access condition for the operation.
        /// A blob_request_options object that specifies additional options for the request.
        /// An operation_context object that represents the context for the current operation.
-        void upload_pages(concurrency::streams::istream page_data, int64_t start_offset, const utility::string_t& content_md5, const access_condition& condition, const blob_request_options& options, operation_context context)
+        void upload_pages(concurrency::streams::istream page_data, int64_t start_offset, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context)
        {
-            upload_pages_async(page_data, start_offset, content_md5, condition, options, context).wait();
+            upload_pages_async(page_data, start_offset, content_checksum, condition, options, context).wait();
        }

        ///
-        /// Intitiates an asynchronous operation to write pages to a page blob.
+        /// Initiates an asynchronous operation to write pages to a page blob.
        ///
        /// A stream providing the page data.
        /// The offset at which to begin writing, in bytes. The offset must be a multiple of 512.
-        /// An optional hash value that will be used to set the Content-MD5 property
-        /// on the blob. May be an empty string.
+        /// A hash value used to ensure transactional integrity. May be empty, or a base64-encoded MD5 string or CRC64 integer.
        /// A pplx::task object that represents the current operation.
-        pplx::task<void> upload_pages_async(concurrency::streams::istream source, int64_t start_offset, const utility::string_t& content_md5)
+        pplx::task<void> upload_pages_async(concurrency::streams::istream source, int64_t start_offset, const checksum& content_checksum)
        {
-            return upload_pages_async(source, start_offset, content_md5, access_condition(), blob_request_options(), operation_context());
+            return upload_pages_async(source, start_offset, content_checksum, access_condition(), blob_request_options(), operation_context());
        }

        ///
-        /// Intitiates an asynchronous operation to write pages to a page blob.
+        /// Initiates an asynchronous operation to write pages to a page blob.
        ///
        /// A stream providing the page data.
        /// The offset at which to begin writing, in bytes. The offset must be a multiple of 512.
-        /// An optional hash value that will be used to set the Content-MD5 property
-        /// on the blob. May be an empty string.
+        /// A hash value used to ensure transactional integrity. May be empty, or a base64-encoded MD5 string or CRC64 integer.
        /// An access_condition object that represents the access condition for the operation.
        /// A blob_request_options object that specifies additional options for the request.
        /// An operation_context object that represents the context for the current operation.
        /// A pplx::task object that represents the current operation.
-        WASTORAGE_API pplx::task<void> upload_pages_async(concurrency::streams::istream source, int64_t start_offset, const utility::string_t& content_md5, const access_condition& condition, const blob_request_options& options, operation_context context);
+        pplx::task<void> upload_pages_async(concurrency::streams::istream source, int64_t start_offset, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return upload_pages_async(source, start_offset, content_checksum, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        ///
+        /// Initiates an asynchronous operation to write pages to a page blob.
+        ///
+        /// A stream providing the page data.
+        /// The offset at which to begin writing, in bytes. The offset must be a multiple of 512.
+        /// A hash value used to ensure transactional integrity. May be empty, or a base64-encoded MD5 string or CRC64 integer.
+        /// An access_condition object that represents the access condition for the operation.
+        /// A blob_request_options object that specifies additional options for the request.
+        /// An operation_context object that represents the context for the current operation.
+        /// A pplx::cancellation_token object that is used to cancel the current operation.
+        /// A pplx::task object that represents the current operation.
+        pplx::task<void> upload_pages_async(concurrency::streams::istream source, int64_t start_offset, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
+        {
+            return upload_pages_async_impl(source, start_offset, content_checksum, condition, options, context, cancellation_token, true);
+        }

        ///
        /// Uploads a stream to a page blob. If the blob already exists on the service, it will be overwritten.
@@ -6086,7 +7637,7 @@ namespace azure { namespace storage {
        }

        ///
-        /// Intitiates an asynchronous operation to upload a stream to a page blob. If the blob already exists on the service, it will be overwritten.
+        /// Initiates an asynchronous operation to upload a stream to a page blob. If the blob already exists on the service, it will be overwritten.
        ///
        /// The stream providing the blob content.
        /// A pplx::task object that represents the current operation.
@@ -6096,7 +7647,7 @@ namespace azure { namespace storage {
        }

        ///
-        /// Intitiates an asynchronous operation to upload a stream to a page blob. If the blob already exists on the service, it will be overwritten.
+        /// Initiates an asynchronous operation to upload a stream to a page blob. If the blob already exists on the service, it will be overwritten.
        ///
        /// The stream providing the blob content.
        /// A user-controlled number to track request sequence, whose value must be between 0 and 2^63 - 1.
@@ -6110,7 +7661,7 @@ namespace azure { namespace storage {
        }

        ///
-        /// Intitiates an asynchronous operation to upload a stream to a page blob. If the blob already exists on the service, it will be overwritten.
+        /// Initiates an asynchronous operation to upload a stream to a page blob. If the blob already exists on the service, it will be overwritten.
        ///
        /// The stream providing the blob content.
        /// The number of bytes to write from the source stream at its current position.
@@ -6121,7 +7672,7 @@ namespace azure { namespace storage {
        }

        ///
-        /// Intitiates an asynchronous operation to upload a stream to a page blob. If the blob already exists on the service, it will be overwritten.
+        /// Initiates an asynchronous operation to upload a stream to a page blob. If the blob already exists on the service, it will be overwritten.
        ///
        /// The stream providing the blob content.
        /// The number of bytes to write from the source stream at its current position.
@@ -6130,7 +7681,23 @@ namespace azure { namespace storage {
        /// A blob_request_options object that specifies additional options for the request.
        /// An operation_context object that represents the context for the current operation.
        /// A pplx::task object that represents the current operation.
-        WASTORAGE_API pplx::task<void> upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context);
+        pplx::task<void> upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return upload_from_stream_async(source, length, sequence_number, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        ///
+        /// Initiates an asynchronous operation to upload a stream to a page blob. If the blob already exists on the service, it will be overwritten.
+        ///
+        /// The stream providing the blob content.
+        /// The number of bytes to write from the source stream at its current position.
+        /// A user-controlled number to track request sequence, whose value must be between 0 and 2^63 - 1.
+        /// An access_condition object that represents the access condition for the operation.
+        /// A blob_request_options object that specifies additional options for the request.
+        /// An operation_context object that represents the context for the current operation.
+        /// A pplx::cancellation_token object that is used to cancel the current operation.
+        /// A pplx::task object that represents the current operation.
+        WASTORAGE_API pplx::task<void> upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token);

        ///
        /// Uploads a file to a page blob. If the blob already exists on the service, it will be overwritten.
@@ -6155,7 +7722,7 @@ namespace azure { namespace storage {
        }

        ///
-        /// Intitiates an asynchronous operation to upload a file to a page blob. If the blob already exists on the service, it will be overwritten.
+        /// Initiates an asynchronous operation to upload a file to a page blob. If the blob already exists on the service, it will be overwritten.
        ///
        /// The file providing the blob content.
        /// A pplx::task object that represents the current operation.
@@ -6165,15 +7732,30 @@ namespace azure { namespace storage {
        }

        ///
-        /// Intitiates an asynchronous operation to upload a file to a page blob. If the blob already exists on the service, it will be overwritten.
+        /// Initiates an asynchronous operation to upload a file to a page blob. If the blob already exists on the service, it will be overwritten.
+        ///
+        /// The file providing the blob content.
+        /// A user-controlled number to track request sequence, whose value must be between 0 and 2^63 - 1.
+        /// An access_condition object that represents the access condition for the operation.
+        /// A blob_request_options object that specifies additional options for the request.
+        /// An operation_context object that represents the context for the current operation.
+        /// A pplx::task object that represents the current operation.
+        pplx::task<void> upload_from_file_async(const utility::string_t &path, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return upload_from_file_async(path, sequence_number, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        ///
+        /// Initiates an asynchronous operation to upload a file to a page blob. If the blob already exists on the service, it will be overwritten.
        ///
        /// The file providing the blob content.
        /// A user-controlled number to track request sequence, whose value must be between 0 and 2^63 - 1.
        /// An access_condition object that represents the access condition for the operation.
        /// A blob_request_options object that specifies additional options for the request.
        /// An operation_context object that represents the context for the current operation.
+        /// A pplx::cancellation_token object that is used to cancel the current operation.
        /// A pplx::task object that represents the current operation.
-        WASTORAGE_API pplx::task<void> upload_from_file_async(const utility::string_t &path, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context);
+        WASTORAGE_API pplx::task<void> upload_from_file_async(const utility::string_t &path, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token);

        ///
        /// Creates a page blob.
@@ -6198,108 +7780,471 @@ namespace azure { namespace storage {
        }

        ///
-        /// Intitiates an asynchronous operation to create a page blob.
+        /// Creates a page blob.
+        ///
+        /// The maximum size of the page blob, in bytes.
+        /// A premium_blob_tier enum that represents the tier of the page blob to be created.
+        /// A user-controlled number to track request sequence, whose value must be between 0 and 2^63 - 1.
+        /// An access_condition object that represents the access condition for the operation.
+        /// A blob_request_options object that specifies additional options for the request.
+        /// An operation_context object that represents the context for the current operation.
+        void create(utility::size64_t size, const premium_blob_tier tier, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            create_async(size, tier, sequence_number, condition, options, context).wait();
+        }
+
+        ///
+        /// Initiates an asynchronous operation to create a page blob.
+        ///
+        /// The maximum size of the page blob, in bytes.
+        /// A pplx::task object that represents the current operation.
+        pplx::task<void> create_async(utility::size64_t size)
+        {
+            return create_async(size, 0, access_condition(), blob_request_options(), operation_context());
+        }
+
+        ///
+        /// Initiates an asynchronous operation to create a page blob.
+        ///
+        /// The maximum size of the page blob, in bytes.
+        /// A user-controlled number to track request sequence, whose value must be between 0 and 2^63 - 1.
+        /// An access_condition object that represents the access condition for the operation.
+        /// A blob_request_options object that specifies additional options for the request.
+        /// An operation_context object that represents the context for the current operation.
+        /// A pplx::task object that represents the current operation.
+        pplx::task<void> create_async(utility::size64_t size, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return create_async(size, premium_blob_tier::unknown, sequence_number, condition, options, context);
+        }
+
+        ///
+        /// Initiates an asynchronous operation to create a page blob.
+        ///
+        /// The maximum size of the page blob, in bytes.
+        /// A premium_blob_tier enum that represents the tier of the page blob to be created.
+        /// A user-controlled number to track request sequence, whose value must be between 0 and 2^63 - 1.
+        /// An access_condition object that represents the access condition for the operation.
+        /// A blob_request_options object that specifies additional options for the request.
+        /// An operation_context object that represents the context for the current operation.
+        /// A pplx::task object that represents the current operation.
+        pplx::task<void> create_async(utility::size64_t size, const premium_blob_tier tier, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return create_async(size, tier, sequence_number, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        ///
+        /// Initiates an asynchronous operation to create a page blob.
+        ///
+        /// The maximum size of the page blob, in bytes.
+        /// A premium_blob_tier enum that represents the tier of the page blob to be created.
+        /// A user-controlled number to track request sequence, whose value must be between 0 and 2^63 - 1.
+        /// An access_condition object that represents the access condition for the operation.
+        /// A blob_request_options object that specifies additional options for the request.
+        /// An operation_context object that represents the context for the current operation.
+        /// A pplx::cancellation_token object that is used to cancel the current operation.
+        /// A pplx::task object that represents the current operation.
+        WASTORAGE_API pplx::task<void> create_async(utility::size64_t size, const premium_blob_tier tier, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token);
+
+        ///
+        /// Resizes the page blob to the specified size.
+        ///
+        /// The size of the page blob, in bytes.
+        void resize(utility::size64_t size)
+        {
+            resize_async(size).wait();
+        }
+
+        ///
+        /// Resizes the page blob to the specified size.
+        ///
+        /// The size of the page blob, in bytes.
+        /// An access_condition object that represents the access condition for the operation.
+        /// A blob_request_options object that specifies additional options for the request.
+        /// An operation_context object that represents the context for the current operation.
+ void resize(utility::size64_t size, const access_condition& condition, const blob_request_options& options, operation_context context) + { + resize_async(size, condition, options, context).wait(); + } + + /// + /// Initiates an asynchronous operation to resize the page blob to the specified size. + /// + /// The size of the page blob, in bytes. + /// A object that represents the current operation. + pplx::task resize_async(utility::size64_t size) + { + return resize_async(size, access_condition(), blob_request_options(), operation_context()); + } + + /// + /// Initiates an asynchronous operation to resize the page blob to the specified size. + /// + /// The size of the page blob, in bytes. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task resize_async(utility::size64_t size, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return resize_async(size, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to resize the page blob to the specified size. + /// + /// The size of the page blob, in bytes. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object that represents the current operation. + WASTORAGE_API pplx::task resize_async(utility::size64_t size, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); + + /// + /// Sets the page blob's sequence number. 
+        ///
+        /// <param name="sequence_number">A value of type <see cref="azure::storage::sequence_number" />, indicating the operation to perform on the sequence number.</param>
+        void set_sequence_number(const azure::storage::sequence_number& sequence_number)
+        {
+            set_sequence_number_async(sequence_number).wait();
+        }
+
+        /// <summary>
+        /// Sets the page blob's sequence number.
+        /// </summary>
+        /// <param name="sequence_number">A value of type <see cref="azure::storage::sequence_number" />, indicating the operation to perform on the sequence number.</param>
+        /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        void set_sequence_number(const azure::storage::sequence_number& sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            set_sequence_number_async(sequence_number, condition, options, context).wait();
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to set the page blob's sequence number.
+        /// </summary>
+        /// <param name="sequence_number">A value of type <see cref="azure::storage::sequence_number" />, indicating the operation to perform on the sequence number.</param>
+        /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
+        pplx::task<void> set_sequence_number_async(const azure::storage::sequence_number& sequence_number)
+        {
+            return set_sequence_number_async(sequence_number, access_condition(), blob_request_options(), operation_context());
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to set the page blob's sequence number.
+        /// </summary>
+        /// <param name="sequence_number">A value of type <see cref="azure::storage::sequence_number" />, indicating the operation to perform on the sequence number.</param>
+        /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
+        pplx::task<void> set_sequence_number_async(const azure::storage::sequence_number& sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return set_sequence_number_async(sequence_number, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to set the page blob's sequence number.
+        /// </summary>
+        /// <param name="sequence_number">A value of type <see cref="azure::storage::sequence_number" />, indicating the operation to perform on the sequence number.</param>
+        /// <param name="condition">An <see cref="azure::storage::access_condition" /> object that represents the access condition for the operation.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <param name="cancellation_token">An <see cref="pplx::cancellation_token" /> object that is used to cancel the current operation.</param>
+        /// <returns>A <see cref="pplx::task" /> object that represents the current operation.</returns>
+        WASTORAGE_API pplx::task<void> set_sequence_number_async(const azure::storage::sequence_number& sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token);
+
+        /// <summary>
+        /// Begins an operation to copy a snapshot of the source page blob and its metadata to a destination page blob.
+        /// </summary>
+        /// <param name="source">The source page blob object that specifies a snapshot.</param>
+        /// <returns>The copy ID associated with the incremental copy operation.</returns>
+        /// <remarks>
+        /// The destination of an incremental copy must either not exist, or must have been created with a previous incremental copy from the same source blob.
+        /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared.
+        /// </remarks>
+        utility::string_t start_incremental_copy(const cloud_page_blob& source)
+        {
+            return start_incremental_copy_async(source).get();
+        }
+
+        /// <summary>
+        /// Begins an operation to copy a snapshot of the source page blob and its metadata to a destination page blob.
+        /// </summary>
+        /// <param name="source">The URI of a snapshot of the source page blob.</param>
+        /// <returns>The copy ID associated with the incremental copy operation.</returns>
+        /// <remarks>
+        /// The destination of an incremental copy must either not exist, or must have been created with a previous incremental copy from the same source blob.
+        /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared.
+        /// </remarks>
+        utility::string_t start_incremental_copy(const web::http::uri& source)
+        {
+            return start_incremental_copy_async(source).get();
+        }
+
+        /// <summary>
+        /// Begins an operation to copy a snapshot of the source page blob and its metadata to a destination page blob.
+        /// </summary>
+        /// <param name="source">The source page blob object that specifies a snapshot.</param>
+        /// <param name="condition">An object that represents the <see cref="azure::storage::access_condition" /> for the destination blob.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <returns>The copy ID associated with the incremental copy operation.</returns>
+        /// <remarks>
+        /// The destination of an incremental copy must either not exist, or must have been created with a previous incremental copy from the same source blob.
+        /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared.
+        /// </remarks>
+        utility::string_t start_incremental_copy(const cloud_page_blob& source, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return start_incremental_copy_async(source, condition, options, context).get();
+        }
+
+        /// <summary>
+        /// Begins an operation to copy a snapshot of the source page blob and its metadata to a destination page blob.
+        /// </summary>
+        /// <param name="source">The URI of a snapshot of the source page blob.</param>
+        /// <param name="condition">An object that represents the <see cref="azure::storage::access_condition" /> for the destination blob.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <returns>The copy ID associated with the incremental copy operation.</returns>
+        ///
+        /// The destination of an incremental copy must either not exist, or must have been created with a previous incremental copy from the same source blob.
+        /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared.
+        ///
+        utility::string_t start_incremental_copy(const web::http::uri& source, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return start_incremental_copy_async(source, condition, options, context).get();
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to begin copying a snapshot of the source page blob and its metadata to a destination page blob.
+        /// </summary>
+        /// <param name="source">The source page blob object that specifies a snapshot.</param>
+        /// <returns>A <see cref="pplx::task" /> object of type <see cref="utility::string_t" /> that represents the current operation.</returns>
+        /// <remarks>
+        /// The destination of an incremental copy must either not exist, or must have been created with a previous incremental copy from the same source blob.
+        /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared.
+        /// </remarks>
+        pplx::task<utility::string_t> start_incremental_copy_async(const cloud_page_blob& source)
+        {
+            return start_incremental_copy_async(source, access_condition(), blob_request_options(), operation_context());
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to begin copying a snapshot of the source page blob and its metadata to a destination page blob.
+        /// </summary>
+        /// <param name="source">The URI of a snapshot of the source page blob.</param>
+        /// <returns>A <see cref="pplx::task" /> object of type <see cref="utility::string_t" /> that represents the current operation.</returns>
+        /// <remarks>
+        /// The destination of an incremental copy must either not exist, or must have been created with a previous incremental copy from the same source blob.
+        /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared.
+        /// </remarks>
+        pplx::task<utility::string_t> start_incremental_copy_async(const web::http::uri& source)
+        {
+            return start_incremental_copy_async(source, access_condition(), blob_request_options(), operation_context());
+        }
+
+        ///
+        /// Initiates an asynchronous operation to begin copying a snapshot of the source page blob and its metadata to a destination page blob.
+        ///
+        /// <param name="source">The source page blob object that specifies a snapshot.</param>
+        /// <param name="condition">An object that represents the <see cref="azure::storage::access_condition" /> for the destination blob.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <returns>A <see cref="pplx::task" /> object of type <see cref="utility::string_t" /> that represents the current operation.</returns>
+        /// <remarks>
+        /// The destination of an incremental copy must either not exist, or must have been created with a previous incremental copy from the same source blob.
+        /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared.
+        /// </remarks>
+        pplx::task<utility::string_t> start_incremental_copy_async(const cloud_page_blob& source, const access_condition& condition, const blob_request_options& options, operation_context context)
+        {
+            return start_incremental_copy_async(source, condition, options, context, pplx::cancellation_token::none());
+        }
+
+        /// <summary>
+        /// Initiates an asynchronous operation to begin copying a snapshot of the source page blob and its metadata to a destination page blob.
+        /// </summary>
+        /// <param name="source">The source page blob object that specifies a snapshot.</param>
+        /// <param name="condition">An object that represents the <see cref="azure::storage::access_condition" /> for the destination blob.</param>
+        /// <param name="options">An <see cref="azure::storage::blob_request_options" /> object that specifies additional options for the request.</param>
+        /// <param name="context">An <see cref="azure::storage::operation_context" /> object that represents the context for the current operation.</param>
+        /// <returns>A <see cref="pplx::task" /> object of type <see cref="utility::string_t" /> that represents the current operation.</returns>
+        ///
+        /// The destination of an incremental copy must either not exist, or must have been created with a previous incremental copy from the same source blob.
+        /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared.
+ /// + WASTORAGE_API pplx::task start_incremental_copy_async(const cloud_page_blob& source, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); + + /// + /// Initiates an asynchronous operation to begin to copy a snapshot of the source page blob and metadata to a destination page blob. /// - /// The maximum size of the page blob, in bytes. - /// A object that represents the current operation. - pplx::task create_async(utility::size64_t size) + /// The URI of a snapshot of source page blob. + /// An object that represents the for the destination blob. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + /// + /// The destination of an incremental copy must either not exist, or must have been created with a previous incremental copy from the same source blob. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. + /// + pplx::task start_incremental_copy_async(const web::http::uri& source, const access_condition& condition, const blob_request_options& options, operation_context context) { - return create_async(size, 0, access_condition(), blob_request_options(), operation_context()); + return start_incremental_copy_async(source, condition, options, context, pplx::cancellation_token::none()); } /// - /// Intitiates an asynchronous operation to create a page blob. + /// Initiates an asynchronous operation to begin to copy a snapshot of the source page blob and metadata to a destination page blob. /// - /// The maximum size of the page blob, in bytes. - /// A user-controlled number to track request sequence, whose value must be between 0 and 2^63 - 1. - /// An object that represents the access condition for the operation. + /// The URI of a snapshot of source page blob. 
+ /// An object that represents the for the destination blob. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. - /// A object that represents the current operation. - WASTORAGE_API pplx::task create_async(utility::size64_t size, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context); + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + /// + /// The destination of an incremental copy must either not exist, or must have been created with a previous incremental copy from the same source blob. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. + /// + WASTORAGE_API pplx::task start_incremental_copy_async(const web::http::uri& source, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// - /// Resizes the page blob to the specified size. + /// Sets premium account's page blob tier. /// - /// The size of the page blob, in bytes. - void resize(utility::size64_t size) + /// An enum that represents the blob tier to be set. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + void set_premium_blob_tier(const premium_blob_tier tier, const access_condition & condition, const blob_request_options & options, operation_context context) { - resize_async(size).wait(); + set_premium_blob_tier_async(tier, condition, options, context).wait(); } /// - /// Resizes the page blob to the specified size. + /// Initiates an asynchronous operation to set premium account's blob tier. 
/// - /// The size of the page blob, in bytes. + /// An enum that represents the blob tier to be set. /// An object that represents the access condition for the operation. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. - void resize(utility::size64_t size, const access_condition& condition, const blob_request_options& options, operation_context context) + /// A object of type that represents the current operation. + pplx::task set_premium_blob_tier_async(const premium_blob_tier tier, const access_condition & condition, const blob_request_options & options, operation_context context) { - resize_async(size, condition, options, context).wait(); + return set_premium_blob_tier_async(tier, condition, options, context, pplx::cancellation_token::none()); } /// - /// Intitiates an asynchronous operation to resize the page blob to the specified size. + /// Initiates an asynchronous operation to set premium account's blob tier. /// - /// The size of the page blob, in bytes. - /// A object that represents the current operation. - pplx::task resize_async(utility::size64_t size) - { - return resize_async(size, access_condition(), blob_request_options(), operation_context()); - } + /// An enum that represents the blob tier to be set. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + WASTORAGE_API pplx::task set_premium_blob_tier_async(const premium_blob_tier tier, const access_condition & condition, const blob_request_options & options, operation_context context, const pplx::cancellation_token& cancellation_token); /// - /// Intitiates an asynchronous operation to resize the page blob to the specified size. 
+ /// Begins an operation to copy a blob's contents, properties, and metadata to a new blob. /// - /// The size of the page blob, in bytes. - /// An object that represents the access condition for the operation. + /// The URI of a source blob. + /// An enum that represents the for the destination blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. - /// A object that represents the current operation. - WASTORAGE_API pplx::task resize_async(utility::size64_t size, const access_condition& condition, const blob_request_options& options, operation_context context); + /// The copy ID associated with the copy operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. + /// + utility::string_t start_copy(const web::http::uri& source, const azure::storage::premium_blob_tier tier, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context) + { + return start_copy_async(source, tier, source_condition, destination_condition, options, context).get(); + } /// - /// Sets the page blob's sequence number. + /// Begins an operation to copy a blob's contents, properties, and metadata to a new blob. /// - /// A value of type , indicating the operation to perform on the sequence number. - void set_sequence_number(const azure::storage::sequence_number& sequence_number) + /// The URI of a source blob. + /// An enum that represents the for the destination blob. + /// Metadata that will be set on the destination blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. 
+ /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// The copy ID associated with the copy operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. + /// + utility::string_t start_copy(const web::http::uri& source, const azure::storage::premium_blob_tier tier, const cloud_metadata& metadata, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context) { - set_sequence_number_async(sequence_number).wait(); + return start_copy_async(source, tier, metadata, source_condition, destination_condition, options, context, pplx::cancellation_token::none()).get(); } /// - /// Sets the page blob's sequence number. + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. /// - /// A value of type , indicating the operation to perform on the sequence number. - /// An object that represents the access condition for the operation. + /// The URI of a source blob. + /// An enum that represents the for the destination blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. - void set_sequence_number(const azure::storage::sequence_number& sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context) + /// A object of type that represents the current operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. 
+ /// + pplx::task start_copy_async(const web::http::uri& source, const premium_blob_tier tier, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context) { - set_sequence_number_async(sequence_number, condition, options, context).wait(); + return start_copy_async(source, tier, source_condition, destination_condition, options, context, pplx::cancellation_token::none()); } /// - /// Intitiates an asynchronous operation to set the page blob's sequence number. + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. /// - /// A value of type , indicating the operation to perform on the sequence number. - /// A object that represents the current operation. - pplx::task set_sequence_number_async(const azure::storage::sequence_number& sequence_number) + /// The URI of a source blob. + /// An enum that represents the for the destination blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. 
+ /// + pplx::task start_copy_async(const web::http::uri& source, const premium_blob_tier tier, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { - return set_sequence_number_async(sequence_number, access_condition(), blob_request_options(), operation_context()); + return start_copy_async(source, tier, cloud_metadata(), source_condition, destination_condition, options, context, cancellation_token); } /// - /// Intitiates an asynchronous operation to set the page blob's sequence number. + /// Initiates an asynchronous operation to begin to copy a blob's contents, properties, and metadata to a new blob. /// - /// A value of type , indicating the operation to perform on the sequence number. - /// An object that represents the access condition for the operation. + /// The URI of a source blob. + /// An enum that represents the for the destination blob. + /// Metadata that will be set on the destination blob. + /// An object that represents the for the source blob. + /// An object that represents the for the destination blob. /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. - /// A object that represents the current operation. - WASTORAGE_API pplx::task set_sequence_number_async(const azure::storage::sequence_number& sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context); - + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + /// + /// This method fetches the blob's ETag, last-modified time, and part of the copy state. + /// The copy ID and copy status fields are fetched, and the rest of the copy state is cleared. 
+ /// + pplx::task start_copy_async(const web::http::uri& source, const premium_blob_tier tier, const cloud_metadata& metadata, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return start_copy_async_impl(source, tier, metadata, source_condition, destination_condition, options, context, cancellation_token); + } private: /// @@ -6314,8 +8259,13 @@ namespace azure { namespace storage { set_type(blob_type::page_blob); } + WASTORAGE_API pplx::task upload_pages_async_impl(concurrency::streams::istream source, int64_t start_offset, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_timeout, std::shared_ptr timer_handler = nullptr); + WASTORAGE_API pplx::task open_write_async_impl(utility::size64_t size, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout, std::shared_ptr timer_handler = nullptr); + WASTORAGE_API pplx::task> download_page_ranges_diff_async_impl(const utility::string_t& previous_snapshot_time, const utility::string_t& previous_snapshot_url, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; + friend class cloud_blob_container; friend class cloud_blob_directory; + friend class core::basic_cloud_page_blob_ostreambuf; }; /// @@ -6399,7 +8349,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to create an empty append blob. If the blob already exists, this will replace it. 
To avoid overwriting and instead throw an error, please pass in an + /// Initiates an asynchronous operation to create an empty append blob. If the blob already exists, this will replace it. To avoid overwriting and instead throw an error, please pass in an /// parameter generated using /// /// A object that represents the current operation. @@ -6409,66 +8359,96 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to create an empty append blob. If the blob already exists, this will replace it. To avoid overwriting and instead throw an error, please pass in an + /// Initiates an asynchronous operation to create an empty append blob. If the blob already exists, this will replace it. To avoid overwriting and instead throw an error, please pass in an + /// parameter generated using + /// + /// An object that represents the access condition for the operation. + /// A object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task create_or_replace_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return create_or_replace_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to create an empty append blob. If the blob already exists, this will replace it. To avoid overwriting and instead throw an error, please pass in an /// parameter generated using /// /// An object that represents the access condition for the operation. /// A object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. 
- WASTORAGE_API pplx::task create_or_replace_async(const access_condition& condition, const blob_request_options& options, operation_context context); + pplx::task create_or_replace_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return create_or_replace_async_impl(condition, options, context, cancellation_token); + } /// /// Commits a new block of data to the end of the blob. /// /// A stream that provides the data for the block. - /// An optional hash value that will be used to to ensure transactional integrity - /// for the block. May be an empty string. + /// A hash value used to ensure transactional integrity. May be or a base64-encoded MD5 string or CRC64 integer. /// The offset in bytes at which the block was committed to. - int64_t append_block(concurrency::streams::istream block_data, const utility::string_t& content_md5) const + int64_t append_block(concurrency::streams::istream block_data, const checksum& content_checksum) const { - return append_block_async(block_data, content_md5).get(); + return append_block_async(block_data, content_checksum).get(); } /// /// Commits a new block of data to the end of the blob. /// /// A stream that provides the data for the block. - /// An optional hash value that will be used to to ensure transactional integrity - /// for the block. May be an empty string. + /// A hash value used to ensure transactional integrity. May be or a base64-encoded MD5 string or CRC64 integer. /// An object that represents the access condition for the operation. /// A object that specifies additional options for the request. /// An object that represents the context for the current operation. /// The offset in bytes at which the block was committed to. 
- int64_t append_block(concurrency::streams::istream block_data, const utility::string_t& content_md5, const access_condition& condition, const blob_request_options& options, operation_context context) const + int64_t append_block(concurrency::streams::istream block_data, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context) const { - return append_block_async(block_data, content_md5, condition, options, context).get(); + return append_block_async(block_data, content_checksum, condition, options, context).get(); } /// - /// Intitiates an asynchronous operation to commit a new block of data to the end of the blob. + /// Initiates an asynchronous operation to commit a new block of data to the end of the blob. /// /// A stream that provides the data for the block. - /// An optional hash value that will be used to to ensure transactional integrity - /// for the block. May be an empty string. + /// A hash value used to ensure transactional integrity. May be or a base64-encoded MD5 string or CRC64 integer. /// A object that represents the current operation. - pplx::task append_block_async(concurrency::streams::istream block_data, const utility::string_t& content_md5) const + pplx::task append_block_async(concurrency::streams::istream block_data, const checksum& content_checksum) const { - return append_block_async(block_data, content_md5, access_condition(), blob_request_options(), operation_context()); + return append_block_async(block_data, content_checksum, access_condition(), blob_request_options(), operation_context()); } /// - /// Intitiates an asynchronous operation to commit a new block of data to the end of the blob. + /// Initiates an asynchronous operation to commit a new block of data to the end of the blob. /// /// A stream that provides the data for the block. - /// An optional hash value that will be used to to ensure transactional integrity - /// for the block. 
May be an empty string. + /// A hash value used to ensure transactional integrity. May be or a base64-encoded MD5 string or CRC64 integer. /// An object that represents the access condition for the operation. /// A object that specifies additional options for the request. /// An object that represents the context for the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task append_block_async(concurrency::streams::istream block_data, const utility::string_t& content_md5, const access_condition& condition, const blob_request_options& options, operation_context context) const; + pplx::task append_block_async(concurrency::streams::istream block_data, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context) const + { + return append_block_async(block_data, content_checksum, condition, options, context, pplx::cancellation_token::none()); + } + /// + /// Initiates an asynchronous operation to commit a new block of data to the end of the blob. + /// + /// A stream that provides the data for the block. + /// A hash value used to ensure transactional integrity. May be or a base64-encoded MD5 string or CRC64 integer. + /// An object that represents the access condition for the operation. + /// A object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object that represents the current operation. 
+ pplx::task append_block_async(concurrency::streams::istream block_data, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const + { + return append_block_async_impl(block_data, content_checksum, condition, options, context, cancellation_token, true); + } /// /// Downloads the blob's contents as a string. /// @@ -6491,7 +8471,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to download the blob's contents as a string. + /// Initiates an asynchronous operation to download the blob's contents as a string. /// /// A object of type that represents the current operation. pplx::task download_text_async() @@ -6500,13 +8480,26 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to download the blob's contents as a string. + /// Initiates an asynchronous operation to download the blob's contents as a string. + /// + /// An object that represents the access condition for the operation. + /// A object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task download_text_async(const access_condition& condition, const blob_request_options& options, operation_context context) + { + return download_text_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to download the blob's contents as a string. /// /// An object that represents the access condition for the operation. /// A object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. 
- WASTORAGE_API pplx::task download_text_async(const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task download_text_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Opens a stream for writing to the append blob. @@ -6532,7 +8525,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to open a stream for writing to the append blob. + /// Initiates an asynchronous operation to open a stream for writing to the append blob. /// /// Use true to create a new append blob or overwrite an existing one, false to append to an existing blob. /// A object of type that represents the current operation. @@ -6542,14 +8535,31 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to open a stream for writing to the append blob. + /// Initiates an asynchronous operation to open a stream for writing to the append blob. + /// + /// Use true to create a new append blob or overwrite an existing one, false to append to an existing blob. + /// An object that represents the access condition for the operation. + /// A object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task open_write_async(bool create_new, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return open_write_async(create_new, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to open a stream for writing to the append blob. /// /// Use true to create a new append blob or overwrite an existing one, false to append to an existing blob. /// An object that represents the access condition for the operation. 
/// A object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object of type that represents the current operation. - WASTORAGE_API pplx::task open_write_async(bool create_new, const access_condition& condition, const blob_request_options& options, operation_context context); + pplx::task open_write_async(bool create_new, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + return open_write_async_impl(create_new, condition, options, context, cancellation_token, true); + } /// /// Uploads a stream to an append blob. If the blob already exists on the service, it will be overwritten. @@ -6620,7 +8630,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to upload a stream to the append blob. If the blob already exists on the service, it will be overwritten. + /// Initiates an asynchronous operation to upload a stream to the append blob. If the blob already exists on the service, it will be overwritten. /// /// The stream providing the blob content. /// A object that represents the current operation. @@ -6635,7 +8645,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to upload a stream to the append blob. If the blob already exists on the service, it will be overwritten. + /// Initiates an asynchronous operation to upload a stream to the append blob. If the blob already exists on the service, it will be overwritten. /// /// The stream providing the blob content. /// An object that represents the access condition for the operation. @@ -6655,7 +8665,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to upload a stream to the append blob. If the blob already exists on the service, it will be overwritten. 
+ /// Initiates an asynchronous operation to upload a stream to the append blob. If the blob already exists on the service, it will be overwritten. /// /// The stream providing the blob content. /// The number of bytes to write from the source stream at its current position. @@ -6671,13 +8681,35 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to upload a stream to the append blob. If the blob already exists on the service, it will be overwritten. + /// Initiates an asynchronous operation to upload a stream to the append blob. If the blob already exists on the service, it will be overwritten. + /// + /// The stream providing the blob content. + /// The number of bytes to write from the source stream at its current position. + /// An object that represents the access condition for the operation. + /// A object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + /// + /// This API should be used strictly in a single writer scenario because the API internally uses the + /// append-offset conditional header to avoid duplicate blocks. + /// If you are guaranteed to have a single writer scenario, please look at + /// and see if setting this flag to true is acceptable for you. + /// If you want to append data to an already existing blob, please look at append_from_stream_async method. + /// + pplx::task upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return upload_from_stream_async(source, length, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to upload a stream to the append blob. If the blob already exists on the service, it will be overwritten. /// /// The stream providing the blob content. 
/// The number of bytes to write from the source stream at its current position. /// An object that represents the access condition for the operation. /// A object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. /// /// This API should be used strictly in a single writer scenario because the API internally uses the @@ -6686,7 +8718,7 @@ namespace azure { namespace storage { /// and see if setting this flag to true is acceptable for you. /// If you want to append data to an already existing blob, please look at append_from_stream_async method. /// - WASTORAGE_API pplx::task upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Uploads a file to the append blob. If the blob already exists on the service, it will be overwritten. @@ -6722,7 +8754,7 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to upload a file to the append blob. If the blob already exists on the service, it will be overwritten. + /// Initiates an asynchronous operation to upload a file to the append blob. If the blob already exists on the service, it will be overwritten. /// /// The file providing the blob content. /// A object that represents the current operation. @@ -6737,12 +8769,33 @@ namespace azure { namespace storage { } /// - /// Intitiates an asynchronous operation to upload a file to the append blob. 
If the blob already exists on the service, it will be overwritten. + /// Initiates an asynchronous operation to upload a file to the append blob. If the blob already exists on the service, it will be overwritten. + /// + /// The file providing the blob content. + /// An object that represents the access condition for the operation. + /// A object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + /// + /// This API should be used strictly in a single writer scenario because the API internally uses the + /// append-offset conditional header to avoid duplicate blocks. + /// If you are guaranteed to have a single writer scenario, please look at + /// and see if setting this flag to true is acceptable for you. + /// If you want to append data to an already existing blob, please look at append_from_file_async method. + /// + pplx::task upload_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return upload_from_file_async(path, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to upload a file to the append blob. If the blob already exists on the service, it will be overwritten. /// /// The file providing the blob content. /// An object that represents the access condition for the operation. /// A object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. /// /// This API should be used strictly in a single writer scenario because the API internally uses the @@ -6751,7 +8804,7 @@ namespace azure { namespace storage { /// and see if setting this flag to true is acceptable for you. 
/// If you want to append data to an already existing blob, please look at append_from_file_async method. /// - WASTORAGE_API pplx::task upload_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context); + WASTORAGE_API pplx::task upload_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Uploads a string of text to the append blob. If the blob already exists on the service, it will be overwritten. @@ -6816,7 +8869,28 @@ namespace azure { namespace storage { /// and see if setting this flag to true is acceptable for you. /// If you want to append data to an already existing blob, please look at append_text_async method. /// - WASTORAGE_API pplx::task upload_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context); + pplx::task upload_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return upload_text_async(content, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Uploads a string of text to the append blob. If the blob already exists on the service, it will be overwritten. + /// + /// A string containing the text to upload. + /// An object that represents the access condition for the operation. + /// A object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object that represents the current operation. 
+ /// + /// This API should be used strictly in a single writer scenario because the API internally uses the + /// append-offset conditional header to avoid duplicate blocks. + /// If you are guaranteed to have a single writer scenario, please look at + /// and see if setting this flag to true is acceptable for you. + /// If you want to append data to an already existing blob, please look at append_text_async method. + /// + WASTORAGE_API pplx::task upload_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Appends a stream to an append blob. This API should be used strictly in a single writer scenario because the API internally uses the @@ -6913,7 +8987,23 @@ namespace azure { namespace storage { /// A object that specifies additional options for the request. /// An object that represents the context for the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task append_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context); + pplx::task append_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return append_from_stream_async(source, length, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to append a stream to an append blob. This API should be used strictly in a single writer scenario because the API internally uses the + /// append-offset conditional header to avoid duplicate blocks which does not work in a multiple writer scenario. + /// + /// A object providing the blob content. 
+ /// The number of bytes to write from the source stream at its current position. + /// An object that represents the access condition for the operation. + /// A object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object that represents the current operation. + WASTORAGE_API pplx::task append_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Appends a file to an append blob. This API should be used strictly in a single writer scenario because the API internally uses the @@ -6958,7 +9048,22 @@ namespace azure { namespace storage { /// A object that specifies additional options for the request. /// An object that represents the context for the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task append_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context); + pplx::task append_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return append_from_file_async(path, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to append a file to an append blob. This API should be used strictly in a single writer scenario because the API internally uses the + /// append-offset conditional header to avoid duplicate blocks which does not work in a multiple writer scenario. + /// + /// The file providing the blob content. + /// An object that represents the access condition for the operation. 
+ /// A object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object that represents the current operation. + WASTORAGE_API pplx::task append_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); /// /// Appends a string of text to an append blob. This API should be used strictly in a single writer scenario @@ -7003,7 +9108,22 @@ namespace azure { namespace storage { /// A object that specifies additional options for the request. /// An object that represents the context for the current operation. /// A object that represents the current operation. - WASTORAGE_API pplx::task append_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context); + pplx::task append_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context) + { + return append_text_async(content, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to append a string of text to an append blob. This API should be used strictly in a single writer scenario + /// because the API internally uses the append-offset conditional header to avoid duplicate blocks which does not work in a multiple writer scenario. + /// + /// A string containing the text to append. + /// An object that represents the access condition for the operation. + /// A object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object that represents the current operation. 
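The remarks in these hunks repeatedly warn that the append APIs rely on the append-offset conditional header and are therefore safe only with a single writer. An illustrative model (not library code) of why a second writer breaks the condition:

```cpp
#include <cstdint>

// Outcome of one conditional append attempt.
struct append_result {
    bool committed;
    std::uint64_t new_length;
};

// The service commits a block only when the blob's current length still equals
// the offset the writer expected when it issued the request. If another writer
// appended in between, the precondition fails and the block is rejected
// (an HTTP 412-style outcome), which is what prevents duplicate blocks.
append_result conditional_append(std::uint64_t current_length,
                                 std::uint64_t expected_append_offset,
                                 std::uint64_t block_size) {
    if (current_length != expected_append_offset) {
        return {false, current_length};
    }
    return {true, current_length + block_size};
}
```

With retries in a single-writer scenario a rejected duplicate is harmless; with multiple writers, legitimate appends also fail the condition, which is the behavior the remarks caution against.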
+ WASTORAGE_API pplx::task append_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token); private: @@ -7029,11 +9149,16 @@ namespace azure { namespace storage { /// An object that represents the access condition for the operation. /// A object that specifies additional options for the request. /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. /// A object that represents the current operation. - pplx::task upload_from_stream_internal_async(concurrency::streams::istream source, utility::size64_t length, bool create_new, const access_condition& condition, const blob_request_options& options, operation_context context); + pplx::task upload_from_stream_internal_async(concurrency::streams::istream source, utility::size64_t length, bool create_new, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, std::shared_ptr timer_handler = nullptr); + WASTORAGE_API pplx::task append_block_async_impl(concurrency::streams::istream block_data, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_timeout, std::shared_ptr timer_handler = nullptr) const; + WASTORAGE_API pplx::task open_write_async_impl(bool create_new, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout, std::shared_ptr timer_handler = nullptr); + WASTORAGE_API pplx::task create_or_replace_async_impl(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& 
cancellation_token, std::shared_ptr timer_handler = nullptr); friend class cloud_blob_container; friend class cloud_blob_directory; + friend class core::basic_cloud_append_blob_ostreambuf; }; /// @@ -7048,13 +9173,15 @@ namespace azure { namespace storage { /// /// The name of the blob. /// The snapshot timestamp, if the blob is a snapshot. + /// The version id of the blob. + /// If this blob version is current active version. /// A reference to the parent container. /// A set of properties for the blob. /// User-defined metadata for the blob. /// the state of the most recent or pending copy operation. - explicit list_blob_item(utility::string_t blob_name, utility::string_t snapshot_time, cloud_blob_container container, cloud_blob_properties properties, cloud_metadata metadata, copy_state copy_state) + explicit list_blob_item(utility::string_t blob_name, utility::string_t snapshot_time, utility::string_t version_id, bool is_current_version, cloud_blob_container container, cloud_blob_properties properties, cloud_metadata metadata, copy_state copy_state) : m_is_blob(true), m_name(std::move(blob_name)), m_container(std::move(container)), - m_snapshot_time(std::move(snapshot_time)), m_properties(std::move(properties)), + m_snapshot_time(std::move(snapshot_time)), m_version_id(std::move(version_id)), m_is_current_version(is_current_version), m_properties(std::move(properties)), m_metadata(std::move(metadata)), m_copy_state(std::move(copy_state)) { } @@ -7095,6 +9222,8 @@ namespace azure { namespace storage { m_name = std::move(other.m_name); m_container = std::move(other.m_container); m_snapshot_time = std::move(other.m_snapshot_time); + m_version_id = std::move(other.m_version_id); + m_is_current_version = other.m_is_current_version; m_properties = std::move(other.m_properties); m_metadata = std::move(other.m_metadata); m_copy_state = std::move(other.m_copy_state); @@ -7113,6 +9242,15 @@ namespace azure { namespace storage { return m_is_blob; } + /// + /// Gets a value 
indicating whether this represents current active version of a blob. + /// + /// true if this represents current active version of a blob; otherwise, false. + bool is_current_version() const + { + return m_is_current_version; + } + /// /// Returns the item as an object, if and only if it represents a cloud blob. /// @@ -7124,7 +9262,12 @@ namespace azure { namespace storage { throw std::runtime_error("Cannot access a cloud blob directory as cloud blob "); } - return cloud_blob(m_name, m_snapshot_time, m_container, m_properties, m_metadata, m_copy_state); + auto blob = cloud_blob(m_name, m_snapshot_time, m_container, m_properties, m_metadata, m_copy_state); + if (!m_version_id.empty()) + { + blob.set_version_id(m_version_id); + } + return blob; } /// @@ -7147,9 +9290,12 @@ namespace azure { namespace storage { utility::string_t m_name; cloud_blob_container m_container; utility::string_t m_snapshot_time; + utility::string_t m_version_id; + bool m_is_current_version = false; cloud_blob_properties m_properties; cloud_metadata m_metadata; copy_state m_copy_state; }; - }} // namespace azure::storage + +#pragma pop_macro("max") diff --git a/Microsoft.WindowsAzure.Storage/includes/was/common.h b/Microsoft.WindowsAzure.Storage/includes/was/common.h index 7e9e8874..6d62cedb 100644 --- a/Microsoft.WindowsAzure.Storage/includes/was/common.h +++ b/Microsoft.WindowsAzure.Storage/includes/was/common.h @@ -30,6 +30,9 @@ #include #endif +#pragma push_macro("min") +#undef min + namespace azure { namespace storage { namespace protocol @@ -805,7 +808,7 @@ namespace azure { namespace storage { /// Initializes a new instance of the class. /// metrics_properties() - : m_include_apis(false), m_retention_enabled(false), m_retention_days(0) + : m_enabled(false), m_include_apis(false), m_retention_enabled(false), m_retention_days(0) { } @@ -950,7 +953,7 @@ namespace azure { namespace storage { /// /// Initializes a new instance of the class. 
/// - cors_rule() + cors_rule() : m_max_age(0) { } @@ -1537,7 +1540,7 @@ namespace azure { namespace storage { } /// - /// Sets the start time of the requeset. + /// Sets the start time of the request. /// /// The start time of the request. void set_start_time(utility::datetime start_time) @@ -1555,7 +1558,7 @@ namespace azure { namespace storage { } /// - /// Sets the end time of the requeset. + /// Sets the end time of the request. /// /// The end time of the request. void set_end_time(utility::datetime end_time) @@ -1706,8 +1709,42 @@ namespace azure { namespace storage { { m_logger = std::move(logger); } + + /// + /// Sets a callback to enable custom setting of the ssl context, at construction time. + /// + /// A user callback allowing for customization of the ssl context at construction time. + void set_ssl_context_callback(const std::function& callback) + { + m_ssl_context_callback = callback; + } + + /// + /// Gets the user's callback to allow for customization of the ssl context. + /// + const std::function& get_ssl_context_callback() const + { + return m_ssl_context_callback; + } #endif + /// + /// Sets a callback to enable custom setting of platform specific options. + /// + /// A user callback allowing for customization of the session. + void set_native_session_handle_options_callback(const std::function& callback) + { + m_native_session_handle_options_callback = callback; + } + + /// + /// Gets the user's callback to custom setting of platform specific options. 
+ /// + const std::function& get_native_session_handle_options_callback() const + { + return m_native_session_handle_options_callback; + } + private: std::function m_sending_request; @@ -1716,13 +1753,15 @@ namespace azure { namespace storage { web::http::http_headers m_user_headers; utility::datetime m_start_time; utility::datetime m_end_time; - client_log_level m_log_level; + client_log_level m_log_level = client_log_level::log_level_off; web::web_proxy m_proxy; std::vector m_request_results; pplx::extensibility::critical_section_t m_request_results_lock; #ifndef _WIN32 boost::log::sources::severity_logger m_logger; + std::function m_ssl_context_callback; //No need to initialize as CPPRest does not initialize it. #endif + std::function m_native_session_handle_options_callback; }; /// @@ -1951,8 +1990,42 @@ namespace azure { namespace storage { { m_impl->set_logger(std::move(logger)); } + + /// + /// Sets a callback to enable custom setting of the ssl context, at construction time. + /// + /// A user callback allowing for customization of the ssl context at construction time. + void set_ssl_context_callback(const std::function& callback) + { + m_impl->set_ssl_context_callback(callback); + } + + /// + /// Gets the user's callback to allow for customization of the ssl context. + /// + const std::function& get_ssl_context_callback() const + { + return m_impl->get_ssl_context_callback(); + } #endif + /// + /// Sets a callback to enable custom setting of platform specific options. + /// + /// A user callback allowing for customization of the session. + void set_native_session_handle_options_callback(const std::function& callback) + { + m_impl->set_native_session_handle_options_callback(callback); + } + + /// + /// Gets the user's callback to custom setting of platform specific options. 
+ /// + const std::function& get_native_session_handle_options_callback() const + { + return m_impl->get_native_session_handle_options_callback(); + } + std::shared_ptr<_operation_context> _get_impl() const { return m_impl; } @@ -2506,11 +2579,11 @@ { public: - // TODO: Optimize request_options to make copying and duplicating these objects unnecesary (maybe make it immutable) + // TODO: Optimize request_options to make copying and duplicating these objects unnecessary (maybe make it immutable) // TODO: Consider not overwriting unset values in request_options with the service's defaults because it is a confusing interface (the service's defaults would be used only when the user does not supply a request_options parameter) #if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // Compilers that fully support C++ 11 r-value reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, // have implicitly-declared move constructor and move assignment operator. /// @@ -2593,7 +2666,7 @@ /// Gets the maximum execution time across all potential retries. /// /// The maximum execution time. - const std::chrono::seconds maximum_execution_time() const + const std::chrono::milliseconds maximum_execution_time() const { return m_maximum_execution_time; } @@ -2602,11 +2675,25 @@ /// Sets the maximum execution time across all potential retries. /// /// The maximum execution time. - void set_maximum_execution_time(std::chrono::seconds maximum_execution_time) + /// + /// This option will not control the total execution time in async open read/open write operations. It will be applied to + /// each underlying request of the read/write/close operations to make sure those requests finish within this time, or + /// time out otherwise.
+ /// + void set_maximum_execution_time(const std::chrono::milliseconds& maximum_execution_time) { m_maximum_execution_time = maximum_execution_time; } + /// + /// Gets whether the maximum execution time was set by the customer. + /// + /// True if the maximum execution time was set by the customer, false otherwise. + bool is_maximum_execution_time_customized() const + { + return m_maximum_execution_time.has_value(); + } + /// /// Gets the location mode of the request. /// @@ -2649,11 +2736,33 @@ m_http_buffer_size = http_buffer_size; } + /// + /// Gets the server certificate validation property. + /// + /// True if certificates are to be verified, false otherwise + bool validate_certificates() const + { + return m_validate_certificates; + } + + /// + /// Sets the server certificate validation property. + /// + /// False to disable all server certificate validation, true otherwise. + /// + /// Disabling certificate validation is not recommended and will leave the user exposed to an insecure environment. + /// Please use with caution and at your own risk. + /// + void set_validate_certificates(bool validate_certificates) + { + m_validate_certificates = validate_certificates; + } + /// /// Gets the expiry time across all potential retries for the request. /// /// The expiry time. 
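The hunks above change `maximum_execution_time` from `std::chrono::seconds` to `std::chrono::milliseconds` (the behavioral note in the v6.0 breaking changes). Existing callers passing seconds keep compiling because `std::chrono` converts seconds to milliseconds implicitly and losslessly. A self-contained sketch, where `demo_options` is a hypothetical stand-in for `blob_request_options`:

```cpp
#include <chrono>

// Stand-in mirroring the new milliseconds-based setter shown above.
struct demo_options {
    std::chrono::milliseconds maximum_execution_time{0};
    void set_maximum_execution_time(const std::chrono::milliseconds& t) {
        maximum_execution_time = t;
    }
};

// A caller that still passes std::chrono::seconds: the implicit
// seconds -> milliseconds conversion makes this compile unchanged.
long long max_time_in_ms(std::chrono::seconds s) {
    demo_options options;
    options.set_maximum_execution_time(s);
    return static_cast<long long>(options.maximum_execution_time.count());
}
```

Note the precision change the breaking-changes entry mentions: values are now stored and compared at millisecond granularity.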
- utility::datetime operation_expiry_time() const + std::chrono::time_point operation_expiry_time() const { return m_operation_expiry_time; } @@ -2683,30 +2792,32 @@ namespace azure { namespace storage { m_maximum_execution_time.merge(other.m_maximum_execution_time); m_location_mode.merge(other.m_location_mode); m_http_buffer_size.merge(other.m_http_buffer_size); + m_validate_certificates.merge(other.m_validate_certificates); if (apply_expiry) { - auto expiry_in_seconds = static_cast(m_maximum_execution_time).count(); - if (!m_operation_expiry_time.is_initialized() && (expiry_in_seconds > 0)) + auto expiry_in_milliseconds = static_cast(m_maximum_execution_time); + if ((m_operation_expiry_time.time_since_epoch().count() == 0) && (expiry_in_milliseconds.count() > 0)) { // This should not be copied from the other options, since // this value should never have a default. Only if it has // not been initialized by the copy constructor, now is the // time to initialize it. - m_operation_expiry_time = utility::datetime::utc_now() + utility::datetime::from_seconds(static_cast(expiry_in_seconds)); + m_operation_expiry_time = std::chrono::system_clock::now() + expiry_in_milliseconds; } } } private: - utility::datetime m_operation_expiry_time; + std::chrono::time_point m_operation_expiry_time; azure::storage::retry_policy m_retry_policy; option_with_default m_noactivity_timeout; option_with_default m_server_timeout; - option_with_default m_maximum_execution_time; + option_with_default m_maximum_execution_time; option_with_default m_location_mode; option_with_default m_http_buffer_size; + option_with_default m_validate_certificates; }; /// @@ -2748,13 +2859,30 @@ namespace azure { namespace storage { public: copy_state() - : m_bytes_copied(0), m_total_bytes(0), m_status(copy_status::invalid) { } -#if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. 
g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, - // have implicitly-declared move constructor and move assignment operator. + copy_state(const copy_state& other) + { + *this = other; + } + + copy_state& operator=(const copy_state& other) + { + if (this != &other) + { + m_copy_id = other.m_copy_id; + m_completion_time = other.m_completion_time; + m_status_description = other.m_status_description; + m_bytes_copied = other.m_bytes_copied; + m_total_bytes = other.m_total_bytes; + m_status = other.m_status; + m_source = other.m_source; + *m_source_uri = *other.m_source_uri; + m_destination_snapshot_time = other.m_destination_snapshot_time; + } + return *this; + } /// /// Initializes a new instance of the class based on an existing instance. @@ -2781,10 +2909,11 @@ namespace azure { namespace storage { m_total_bytes = std::move(other.m_total_bytes); m_status = std::move(other.m_status); m_source = std::move(other.m_source); + *m_source_uri = std::move(*other.m_source_uri); + m_destination_snapshot_time = std::move(other.m_destination_snapshot_time); } return *this; } -#endif /// /// Gets the ID of the copy blob operation. @@ -2819,6 +2948,19 @@ namespace azure { namespace storage { /// /// A indicating the source of a copy operation. const web::http::uri& source() const + { + if (m_source_uri->is_empty()) + { + *m_source_uri = m_source; + } + return *m_source_uri; + } + + /// + /// Gets the URI string of the source blob for a copy operation. + /// + /// A indicating the source of a copy operation. + const utility::string_t& source_raw() const { return m_source; } @@ -2850,18 +2992,31 @@ namespace azure { namespace storage { return m_status_description; } + /// + /// Gets the incremental destination snapshot time for the latest incremental copy, if the time is available. + /// + /// A containing the destination snapshot time for the latest incremental copy. 
+ utility::datetime destination_snapshot_time() const + { + return m_destination_snapshot_time; + } + private: utility::string_t m_copy_id; utility::datetime m_completion_time; utility::string_t m_status_description; - int64_t m_bytes_copied; - int64_t m_total_bytes; - copy_status m_status; - web::http::uri m_source; + int64_t m_bytes_copied = 0; + int64_t m_total_bytes = 0; + copy_status m_status = copy_status::invalid; + utility::string_t m_source; + std::unique_ptr m_source_uri = std::unique_ptr(new web::http::uri()); + utility::datetime m_destination_snapshot_time; friend class protocol::response_parsers; friend class protocol::list_blobs_reader; }; }} // namespace azure::storage + +#pragma pop_macro("min") diff --git a/Microsoft.WindowsAzure.Storage/includes/was/core.h b/Microsoft.WindowsAzure.Storage/includes/was/core.h index 8003d9c9..3c516a61 100644 --- a/Microsoft.WindowsAzure.Storage/includes/was/core.h +++ b/Microsoft.WindowsAzure.Storage/includes/was/core.h @@ -22,6 +22,9 @@ #include "wascore/basic_types.h" #include "wascore/constants.h" +#pragma push_macro("max") +#undef max + namespace azure { namespace storage { class operation_context; @@ -115,7 +118,7 @@ namespace azure { namespace storage { WASTORAGE_API storage_uri(web::http::uri primary_uri, web::http::uri secondary_uri); #if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, // have implicitly-declared move constructor and move assignment operator. 
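The `copy_state` change above replaces the eagerly parsed `web::http::uri` member with a raw source string plus a heap-allocated URI that is parsed only on first access to `source()`. A minimal standalone sketch of that lazy-parse-and-cache pattern, using `std::string` and a hypothetical `parsed_uri` type as stand-ins for the Casablanca types:

```cpp
#include <memory>
#include <string>
#include <utility>

// Hypothetical stand-in for web::http::uri; only the parts the sketch needs.
struct parsed_uri
{
    std::string value;
    bool is_empty() const { return value.empty(); }
};

class copy_state_sketch
{
public:
    explicit copy_state_sketch(std::string source) : m_source(std::move(source)) {}

    // The raw string is always available with no parsing cost.
    const std::string& source_raw() const { return m_source; }

    // Parse on first access only, then reuse the cached result on later calls.
    const parsed_uri& source() const
    {
        if (m_source_uri->is_empty())
        {
            m_source_uri->value = m_source; // real code: *m_source_uri = web::http::uri(m_source)
        }
        return *m_source_uri;
    }

private:
    std::string m_source;
    // Keeping the cache behind a pointer lets a const accessor fill it in,
    // and lets copies deep-copy it (`*m_source_uri = *other.m_source_uri` in the diff).
    std::unique_ptr<parsed_uri> m_source_uri = std::unique_ptr<parsed_uri>(new parsed_uri());
};
```

Callers that only need the string (for example, logging) never pay for URI parsing, which is presumably why the diff also adds the `source_raw()` accessor.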
/// @@ -209,13 +212,28 @@ namespace azure { namespace storage { { } + class account_key_credential + { + public: + account_key_credential(std::vector account_key = std::vector()) : m_account_key(std::move(account_key)) + { + } + + public: + std::vector m_account_key; + + private: + pplx::extensibility::reader_writer_lock_t m_mutex; + + friend class storage_credentials; + }; + /// /// Initializes a new instance of the class with the specified account name and key value. /// /// A string containing the name of the storage account. /// A string containing the Base64-encoded account access key. - storage_credentials(utility::string_t account_name, const utility::string_t& account_key) - : m_account_name(std::move(account_name)), m_account_key(utility::conversions::from_base64(account_key)) + storage_credentials(utility::string_t account_name, const utility::string_t& account_key) : m_account_name(std::move(account_name)), m_account_key_credential(std::make_shared(utility::conversions::from_base64(account_key))) { } @@ -224,27 +242,49 @@ namespace azure { namespace storage { /// /// A string containing the name of the storage account. /// An array of bytes that represent the account access key. - storage_credentials(utility::string_t account_name, std::vector account_key) - : m_account_name(std::move(account_name)), m_account_key(std::move(account_key)) + storage_credentials(utility::string_t account_name, std::vector account_key) : m_account_name(std::move(account_name)), m_account_key_credential(std::make_shared(std::move(account_key))) { } + class sas_credential + { + public: + explicit sas_credential(utility::string_t sas_token) : m_sas_token(std::move(sas_token)) + { + } + + private: + utility::string_t m_sas_token; + + friend class storage_credentials; + }; + /// /// Initializes a new instance of the class with the specified shared access signature token. /// /// A string containing the shared access signature token. 
explicit storage_credentials(utility::string_t sas_token) - : m_sas_token(std::move(sas_token)) + : storage_credentials(utility::string_t(), sas_credential{sas_token}) + { + } + + /// + /// Initializes a new instance of the class with the specified account name and shared access signature token. + /// + /// A string containing the name of the storage account. + /// An containing shared access signature token. + storage_credentials(utility::string_t account_name, sas_credential sas_token) + : m_account_name(std::move(account_name)), m_sas_token(std::move(sas_token.m_sas_token)) { if (m_sas_token.size() >= 1 && m_sas_token.at(0) == _XPLATSTR('?')) { m_sas_token = m_sas_token.substr(1); } - + auto splitted_query = web::uri::split_query(m_sas_token); if (!splitted_query.empty()) { - splitted_query[protocol::uri_query_sas_api_version] = protocol::header_value_storage_version; + splitted_query[protocol::uri_query_sas_api_version] = protocol::header_value_storage_version; web::uri_builder builder; for (const auto& kv : splitted_query) { @@ -254,17 +294,50 @@ namespace azure { namespace storage { } } -#if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, - // have implicitly-declared move constructor and move assignment operator. + class bearer_token_credential + { + public: + explicit bearer_token_credential(utility::string_t bearer_token = utility::string_t()) : m_bearer_token(std::move(bearer_token)) + { + } + + public: + utility::string_t m_bearer_token; + + private: + pplx::extensibility::reader_writer_lock_t m_mutex; + + friend class storage_credentials; + }; + + /// + /// Initializes a new instance of the class with the specified bearer token. + /// + /// A class containing bearer token. 
+ template::type, bearer_token_credential>::value>::type* = nullptr> + explicit storage_credentials(T&& token) : storage_credentials(utility::string_t(), std::forward(token)) + { + } + + /// + /// Initializes a new instance of the class with the specified account name and bearer token. + /// + /// A string containing the name of the storage account. + /// A class containing bearer token. + template::type, bearer_token_credential>::value>::type* = nullptr> + storage_credentials(utility::string_t account_name, T&& token) : m_account_name(std::move(account_name)), m_bearer_token_credential(std::make_shared()) + { + m_bearer_token_credential->m_bearer_token = std::forward(token).m_bearer_token; + } /// /// Initializes a new instance of the class based on an existing instance. /// /// An existing object. - storage_credentials(storage_credentials&& other) + template::type, storage_credentials>::value>::type* = nullptr> + storage_credentials(T&& other) { - *this = std::move(other); + *this = std::forward(other); } /// @@ -272,18 +345,21 @@ namespace azure { namespace storage { /// /// An existing object to use to set properties. /// An object with properties set. 
- storage_credentials& operator=(storage_credentials&& other) + template::type, storage_credentials>::value>::type* = nullptr> + storage_credentials& operator=(T&& other) { if (this != &other) { - m_sas_token = std::move(other.m_sas_token); - m_sas_token_with_api_version = std::move(other.m_sas_token_with_api_version); - m_account_name = std::move(other.m_account_name); - m_account_key = std::move(other.m_account_key); + m_sas_token = std::forward(other).m_sas_token; + m_sas_token_with_api_version = std::forward(other).m_sas_token_with_api_version; + m_account_name = std::forward(other).m_account_name; + std::atomic_store_explicit(&m_account_key_credential, std::atomic_load_explicit(&other.m_account_key_credential, std::memory_order_acquire), std::memory_order_release); + auto key_ptr = std::forward(other).m_account_key_credential; + std::atomic_store_explicit(&m_bearer_token_credential, std::atomic_load_explicit(&other.m_bearer_token_credential, std::memory_order_acquire), std::memory_order_release); + auto token_ptr = std::forward(other).m_bearer_token_credential; } return *this; } -#endif /// /// Transforms a resource URI into a shared access signature URI, by appending a shared access token. @@ -326,7 +402,79 @@ namespace azure { namespace storage { /// An array of bytes that contains the key. const std::vector& account_key() const { - return m_account_key; + auto account_key_ptr = std::atomic_load_explicit(&m_account_key_credential, std::memory_order_acquire); + pplx::extensibility::scoped_read_lock_t guard(account_key_ptr->m_mutex); + return account_key_ptr->m_account_key; + } + + /// + /// Sets the account key for the credentials. + /// + /// A string containing the Base64-encoded account access key. + void set_account_key(const utility::string_t& account_key) + { + set_account_key(utility::conversions::from_base64(account_key)); + } + + /// + /// Sets the account key for the credentials. + /// + /// An array of bytes that represent the account access key. 
+ void set_account_key(std::vector<uint8_t> account_key) + { + auto account_key_ptr = std::atomic_load_explicit(&m_account_key_credential, std::memory_order_acquire); + if (!account_key_ptr) + { + auto new_credential = std::make_shared<account_key_credential>(); + new_credential->m_account_key = std::move(account_key); + /* Compares m_account_key_credential and account_key_ptr(nullptr). + * If they are equivalent, assigns new_credential into m_account_key_credential and returns true. + * If they are not equivalent, assigns m_account_key_credential into account_key_ptr and returns false. + */ + bool set = std::atomic_compare_exchange_strong_explicit(&m_account_key_credential, &account_key_ptr, new_credential, std::memory_order_release, std::memory_order_acquire); + if (set) { + return; + } + account_key = std::move(new_credential->m_account_key); + } + pplx::extensibility::scoped_rw_lock_t guard(account_key_ptr->m_mutex); + account_key_ptr->m_account_key = std::move(account_key); + } + + /// + /// Gets the bearer token for the credentials. + /// + /// The bearer token. + utility::string_t bearer_token() const + { + auto token_ptr = std::atomic_load_explicit(&m_bearer_token_credential, std::memory_order_acquire); + pplx::extensibility::scoped_read_lock_t guard(token_ptr->m_mutex); + return token_ptr->m_bearer_token; + } + + /// + /// Sets the bearer token for the credentials. + /// + /// A string that contains the bearer token. + void set_bearer_token(utility::string_t bearer_token) + { + auto token_ptr = std::atomic_load_explicit(&m_bearer_token_credential, std::memory_order_acquire); + if (!token_ptr) + { + auto new_credential = std::make_shared<bearer_token_credential>(); + new_credential->m_bearer_token = std::move(bearer_token); + /* Compares m_bearer_token_credential and token_ptr(nullptr). + * If they are equivalent, assigns new_credential into m_bearer_token_credential and returns true. + * If they are not equivalent, assigns m_bearer_token_credential into token_ptr and returns false.
+ */ + bool set = std::atomic_compare_exchange_strong_explicit(&m_bearer_token_credential, &token_ptr, new_credential, std::memory_order_release, std::memory_order_acquire); + if (set) { + return; + } + bearer_token = std::move(new_credential->m_bearer_token); + } + pplx::extensibility::scoped_rw_lock_t guard(token_ptr->m_mutex); + token_ptr->m_bearer_token = std::move(bearer_token); } /// @@ -335,7 +483,7 @@ namespace azure { namespace storage { /// true if the credentials are for anonymous access; otherwise, false. bool is_anonymous() const { - return m_sas_token.empty() && m_account_name.empty(); + return m_sas_token.empty() && !is_account_key() && !is_bearer_token(); } /// @@ -344,7 +492,7 @@ namespace azure { namespace storage { /// true if the credentials are a shared access signature token; otherwise, false. bool is_sas() const { - return !m_sas_token.empty() && m_account_name.empty(); + return !m_sas_token.empty() && !is_account_key() && !is_bearer_token(); } /// @@ -353,7 +501,35 @@ namespace azure { namespace storage { /// true if the credentials are a shared key; otherwise, false. bool is_shared_key() const { - return m_sas_token.empty() && !m_account_name.empty(); + return m_sas_token.empty() && is_account_key() && !is_bearer_token(); + } + + /// + /// Indicates whether the credentials are a bearer token. + /// + /// true if the credentials are a bearer token; otherwise false. + bool is_bearer_token() const { + auto token_ptr = std::atomic_load_explicit(&m_bearer_token_credential, std::memory_order_acquire); + if (!token_ptr) + { + return false; + } + pplx::extensibility::scoped_read_lock_t guard(token_ptr->m_mutex); + return !token_ptr->m_bearer_token.empty(); + } + + /// + /// Indicates whether the credentials are an account key. + /// + /// true if the credentials are an account key; otherwise false. 
+ bool is_account_key() const { + auto account_key_ptr = std::atomic_load_explicit(&m_account_key_credential, std::memory_order_acquire); + if (!account_key_ptr) + { + return false; + } + pplx::extensibility::scoped_read_lock_t guard(account_key_ptr->m_mutex); + return !account_key_ptr->m_account_key.empty(); } private: @@ -361,7 +537,240 @@ namespace azure { namespace storage { utility::string_t m_sas_token; utility::string_t m_sas_token_with_api_version; utility::string_t m_account_name; - std::vector m_account_key; + // We use std::atomic_{load/store/...} functions specialized for std::shared_ptr to access this member, since std::atomic> is not available until C++20. + // These become deprecated since C++20, but still compile. + std::shared_ptr m_account_key_credential; + std::shared_ptr m_bearer_token_credential; + }; + + enum class checksum_type + { + none, + md5, + sha256, + crc64, + hmac_sha256, + }; + + using checksum_none_t = std::integral_constant; + using checksum_md5_t = std::integral_constant; + using checksum_sha256_t = std::integral_constant; + using checksum_crc64_t = std::integral_constant; + using checksum_hmac_sha256_t = std::integral_constant; + + constexpr auto checksum_none = checksum_none_t(); + constexpr auto checksum_md5 = checksum_md5_t(); + constexpr auto checksum_sha256 = checksum_sha256_t(); + constexpr auto checksum_crc64 = checksum_crc64_t(); + constexpr auto checksum_hmac_sha256 = checksum_hmac_sha256_t(); + + /// + /// Represents a checksum to verify the integrity of data during transport. + /// + class checksum + { + public: + /// + /// Initializes a new instance of the class without specifying any checksum method. + /// + checksum() : checksum(checksum_none) + { + } + + /// + /// Initializes a new instance of the class with MD5 hash value. + /// + /// A string containing base64-encoded MD5. + /// + /// If the provided string is empty, this class is initialized as if checksum method isn't specified. 
+ /// + checksum(utility::string_t md5) : m_type(checksum_type::md5), m_str_hash(std::move(md5)) + { + if (m_str_hash.empty()) + { + m_type = checksum_type::none; + } + } + + /// + /// Initializes a new instance of the class with MD5 hash value. + /// + /// A string containing base64-encoded MD5. + /// + /// If the provided string is empty, this class is initialized as if checksum method isn't specified. + /// + checksum(const utility::char_t* md5) : checksum(utility::string_t(md5)) + { + } + + /// + /// Initializes a new instance of the class with CRC64 error-detecting code. + /// + /// An integer containing CRC64 code. + checksum(uint64_t crc64) : m_type(checksum_type::crc64), m_crc64(crc64) + { + } + + /// + /// Initializes a new instance of the class without specifying any checksum method. + /// + checksum(checksum_none_t type) : m_type(type.value) + { + } + + /// + /// Initializes a new instance of the class with MD5 hash value. + /// + /// Explicitly specified checksum type, must be . + /// A string containing base64-encoded MD5. + checksum(checksum_md5_t type, utility::string_t val) : m_type(type.value), m_str_hash(std::move(val)) + { + } + + /// + /// Initializes a new instance of the class with SHA-256 hash value. + /// + /// Explicitly specified checksum type, must be . + /// A string containing base64-encoded SHA-256. + checksum(checksum_sha256_t type, utility::string_t val) : m_type(type.value), m_str_hash(std::move(val)) + { + } + + /// + /// Initializes a new instance of the class with CRC64 error-detecting code. + /// + /// Explicitly specified checksum type, must be . + /// An integer containing CRC64 code. + checksum(checksum_crc64_t type, uint64_t val) : m_type(type.value), m_crc64(val) + { + } + + /// + /// Initializes a new instance of the class with HMAC-SHA256 authentication code. + /// + /// Explicitly specified checksum type, must be . + /// A string containing base64-encoded HMAC-SHA256 authentication code. 
+ checksum(checksum_hmac_sha256_t type, utility::string_t val) : m_type(type.value), m_str_hash(std::move(val)) + { + } + +#if defined(_MSC_VER) && _MSC_VER < 1900 + // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // have implicitly-declared move constructor and move assignment operator. + + /// + /// Initializes a new instance of the class based on an existing instance. + /// + /// An existing object. + checksum(checksum&& other) + { + *this = std::move(other); + } + + /// + /// Returns a reference to an object. + /// + /// An existing object to use to set properties. + /// An object with properties set. + checksum& operator=(checksum&& other) + { + if (this != &other) + { + m_type = std::move(other.m_type); + m_str_hash = std::move(other.m_str_hash); + m_crc64 = std::move(other.m_crc64); + } + return *this; + } +#endif + + /// + /// Indicates whether this is an MD5 checksum. + /// + /// true if this is an MD5 checksum; otherwise, false. + bool is_md5() const + { + return m_type == checksum_type::md5; + } + + /// + /// Indicates whether this is an SHA-256 checksum. + /// + /// true if this is an SHA-256 checksum; otherwise, false. + bool is_sha256() const + { + return m_type == checksum_type::sha256; + } + + /// + /// Indicates whether this is an HMAC-SHA256 authentication code. + /// + /// true if this is an HMAC-SHA256 authentication code; otherwise, false. + bool is_hmac_sha256() const + { + return m_type == checksum_type::hmac_sha256; + } + + /// + /// Indicates whether this is a CRC64 checksum. + /// + /// true if this is a CRC64 checksum; otherwise, false. + bool is_crc64() const + { + return m_type == checksum_type::crc64; + } + + /// + /// Indicates whether any checksum method is used. + /// + /// true if no checksum method is used; otherwise, false. + bool empty() const + { + return m_type == checksum_type::none; + } + + /// + /// Gets the MD5 checksum. 
+ /// + /// A string containing base64-encoded MD5. + const utility::string_t& md5() const + { + return m_str_hash; + } + + /// + /// Gets the SHA-256 checksum. + /// + /// A string containing base64-encoded SHA-256. + const utility::string_t& sha256() const + { + return m_str_hash; + } + + /// + /// Gets the HMAC-SHA256 authentication code. + /// + /// A string containing the base64-encoded HMAC-SHA256 authentication code. + const utility::string_t& hmac_sha256() const + { + return m_str_hash; + } + + /// + /// Gets the CRC64 error-detecting code. + /// + /// A string containing the base64-encoded CRC64 error-detecting code. + utility::string_t crc64() const + { + std::vector<uint8_t> crc64_str(sizeof(m_crc64) / sizeof(uint8_t)); + memcpy(crc64_str.data(), &m_crc64, sizeof(m_crc64)); + return utility::conversions::to_base64(crc64_str); + } + + private: + checksum_type m_type; + utility::string_t m_str_hash; + uint64_t m_crc64; }; /// @@ -389,7 +798,7 @@ namespace azure { namespace storage { } #if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, // have implicitly-declared move constructor and move assignment operator. /// @@ -450,6 +859,19 @@ namespace azure { namespace storage { } + /// + /// Merges the specified value. + /// + /// The value. + void merge(const option_with_default& value) + { + if (!m_has_value) + { + *this = value; + this->m_has_value = value.has_value(); + } + } + /// /// Merges the specified value. /// @@ -460,6 +882,15 @@ namespace azure { namespace storage { merge(value.m_has_value ? (const T&)value : fallback_value); } + /// + /// Indicates whether a specified value is set. + /// + /// A boolean indicating whether a specified value is set.
+ bool has_value() const + { + return m_has_value; + } + private: T m_value; @@ -493,7 +924,7 @@ namespace azure { namespace storage { } #if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, // have implicitly-declared move constructor and move assignment operator. /// @@ -570,7 +1001,8 @@ namespace azure { namespace storage { : m_is_response_available(false), m_target_location(storage_location::unspecified), m_http_status_code(0), - m_content_length(std::numeric_limits::max()) + m_content_length(std::numeric_limits::max()), + m_request_server_encrypted(false) { } @@ -585,7 +1017,8 @@ namespace azure { namespace storage { m_target_location(target_location), m_end_time(utility::datetime::utc_now()), m_http_status_code(0), - m_content_length(std::numeric_limits::max()) + m_content_length(std::numeric_limits::max()), + m_request_server_encrypted(false) { } @@ -609,7 +1042,7 @@ namespace azure { namespace storage { WASTORAGE_API request_result(utility::datetime start_time, storage_location target_location, const web::http::http_response& response, web::http::status_code http_status_code, storage_extended_error extended_error); #if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // Compilers that fully support C++ 11 r-value reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, // have implicitly-declared move constructor and move assignment operator. 
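The `option_with_default` merge logic shown above takes a value from the other option only when this option is still unset, so explicit settings on a request-level options object always win over service-level defaults. A minimal self-contained sketch of that semantics (the real class also carries fallback-value overloads and WASTORAGE export macros, which are omitted here):

```cpp
// Simplified model of azure::storage's option_with_default<T>: a value plus a
// "was it explicitly set" flag, merged with lower-priority options on demand.
template<typename T>
class option_with_default
{
public:
    option_with_default() : m_value(T()), m_has_value(false) {}
    option_with_default(const T& value) : m_value(value), m_has_value(true) {}

    operator const T&() const { return m_value; }
    bool has_value() const { return m_has_value; }

    // Only adopt `other` when no value was explicitly set on this option;
    // the flag then mirrors whether `other` itself was explicitly set.
    void merge(const option_with_default<T>& other)
    {
        if (!m_has_value)
        {
            *this = other;
            m_has_value = other.has_value();
        }
    }

private:
    T m_value;
    bool m_has_value;
};
```

Merging an already-set option is a no-op, so chained merges (request options over client defaults over library defaults) resolve to the highest-priority explicit value.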
/// @@ -639,6 +1072,7 @@ namespace azure { namespace storage { m_request_date = std::move(other.m_request_date); m_content_length = std::move(other.m_content_length); m_content_md5 = std::move(other.m_content_md5); + m_content_crc64 = std::move(other.m_content_crc64); m_etag = std::move(other.m_etag); m_request_server_encrypted = other.m_request_server_encrypted; m_extended_error = std::move(other.m_extended_error); @@ -737,6 +1171,15 @@ namespace azure { namespace storage { return m_content_md5; } + /// + /// Gets the content-CRC64 hash for the request. + /// + /// A string containing the content-CRC64 hash for the request. + const utility::string_t& content_crc64() const + { + return m_content_crc64; + } + /// /// Gets the ETag for the request. /// @@ -778,6 +1221,7 @@ namespace azure { namespace storage { utility::datetime m_request_date; utility::size64_t m_content_length; utility::string_t m_content_md5; + utility::string_t m_content_crc64; utility::string_t m_etag; bool m_request_server_encrypted; storage_extended_error m_extended_error; @@ -885,13 +1329,14 @@ namespace azure { namespace storage { /// The last request result. /// The next location to retry. /// The current location mode. 
- retry_context(int current_retry_count, request_result last_request_result, storage_location next_location, location_mode current_location_mode) - : m_current_retry_count(current_retry_count), m_last_request_result(std::move(last_request_result)), m_next_location(next_location), m_current_location_mode(current_location_mode) + /// Exception Ptr of any exception other than storage_exception + retry_context(int current_retry_count, request_result last_request_result, storage_location next_location, location_mode current_location_mode, const std::exception_ptr& nonstorage_exception = nullptr) + : m_current_retry_count(current_retry_count), m_last_request_result(std::move(last_request_result)), m_next_location(next_location), m_current_location_mode(current_location_mode), m_nonstorage_exception(nonstorage_exception) { } #if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, // have implicitly-declared move constructor and move assignment operator. /// @@ -957,12 +1402,22 @@ namespace azure { namespace storage { return m_current_location_mode; } + /// + /// Gets the exception_ptr of any unhandled nonstorage exception during the request. + /// Example: WinHttp exceptions for timeout (12002) + /// + /// An object that represents the nonstorage exception thrown while sending request + const std::exception_ptr& nonstorage_exception() const { + return m_nonstorage_exception; + } + private: int m_current_retry_count; request_result m_last_request_result; storage_location m_next_location; location_mode m_current_location_mode; + std::exception_ptr m_nonstorage_exception; }; /// @@ -992,7 +1447,7 @@ namespace azure { namespace storage { } #if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. 
g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, // have implicitly-declared move constructor and move assignment operator. /// @@ -1166,8 +1621,270 @@ namespace azure { namespace storage { std::shared_ptr m_policy; }; + /// + /// The lease state of a resource. + /// + enum class lease_state + { + /// + /// The lease state is not specified. + /// + unspecified, + + /// + /// The lease is in the Available state. + /// + available, + + /// + /// The lease is in the Leased state. + /// + leased, + + /// + /// The lease is in the Expired state. + /// + expired, + + /// + /// The lease is in the Breaking state. + /// + breaking, + + /// + /// The lease is in the Broken state. + /// + broken, + }; + + /// + /// The lease status of a resource. + /// + enum class lease_status + { + /// + /// The lease status is not specified. + /// + unspecified, + + /// + /// The resource is locked. + /// + locked, + + /// + /// The resource is available to be locked. + /// + unlocked + }; + + /// + /// Specifies the proposed duration of seconds that the lease should continue before it is broken. + /// + class lease_break_period + { + public: + /// + /// Initializes a new instance of the class that breaks + /// a fixed-duration lease after the remaining lease period elapses, or breaks an infinite lease immediately. + /// + lease_break_period() + : m_seconds(std::chrono::seconds::max()) + { + } + + /// + /// Initializes a new instance of the class that breaks + /// a lease after the proposed duration. + /// + /// The proposed duration, in seconds, for the lease before it is broken. Value may + /// be between 0 and 60 seconds. 
+ lease_break_period(const std::chrono::seconds& seconds) + : m_seconds(seconds) + { + if (seconds != std::chrono::seconds::max()) + { + utility::assert_in_bounds(_XPLATSTR("seconds"), seconds, protocol::minimum_lease_break_period, protocol::maximum_lease_break_period); + } + } + +#if defined(_MSC_VER) && _MSC_VER < 1900 + // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // have implicitly-declared move constructor and move assignment operator. + + /// + /// Initializes a new instance of the class based on an existing instance. + /// + /// An existing object. + lease_break_period(lease_break_period&& other) + { + *this = std::move(other); + } + + /// + /// Returns a reference to an object. + /// + /// An existing object to use to set properties. + /// An object with properties set. + lease_break_period& operator=(lease_break_period&& other) + { + if (this != &other) + { + m_seconds = std::move(other.m_seconds); + } + return *this; + } +#endif + + /// + /// Indicates whether the object is valid. + /// + /// true if the object is valid; otherwise, false. + bool is_valid() const + { + return m_seconds < std::chrono::seconds::max(); + } + + /// + /// Gets the proposed duration for the lease before it is broken. + /// + /// The proposed duration for the lease before it is broken, in seconds. + const std::chrono::seconds& seconds() const + { + return m_seconds; + } + + private: + + std::chrono::seconds m_seconds; + }; + + /// + /// The lease duration for a Blob service resource. + /// + enum class lease_duration + { + /// + /// The lease duration is not specified. + /// + unspecified, + + /// + /// The lease duration is finite. + /// + fixed, + + /// + /// The lease duration is infinite. + /// + infinite, + }; + + /// + /// Specifies the duration of the lease. + /// + class lease_time + { + public: + /// + /// Initializes a new instance of the class that never expires.
+ /// + lease_time() + : m_seconds(-1) + { + } + + /// + /// Initializes a new instance of the class that expires after the + /// specified duration. + /// + /// The duration of the lease in seconds. For a non-infinite lease, this value can be + /// between 15 and 60 seconds. + lease_time(const std::chrono::seconds& seconds) + : m_seconds(seconds) + { + if (seconds.count() != -1) + { + utility::assert_in_bounds(_XPLATSTR("seconds"), seconds, protocol::minimum_fixed_lease_duration, protocol::maximum_fixed_lease_duration); + } + } + +#if defined(_MSC_VER) && _MSC_VER < 1900 + // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // have implicitly-declared move constructor and move assignment operator. + + /// + /// Initializes a new instance of the class based on an existing instance. + /// + /// An existing object. + lease_time(lease_time&& other) + { + *this = std::move(other); + } + + /// + /// Returns a reference to an object. + /// + /// An existing object to use to set properties. + /// An object with properties set. + lease_time& operator=(lease_time&& other) + { + if (this != &other) + { + m_seconds = std::move(other.m_seconds); + } + return *this; + } +#endif + + /// + /// Gets the duration of the lease in seconds for a non-infinite lease. + /// + /// The duration of the lease. + const std::chrono::seconds& seconds() const + { + return m_seconds; + } + + private: + + std::chrono::seconds m_seconds; + }; + +#ifdef _WIN32 + /// + /// Interface for scheduling tasks that start after a provided delay in milliseconds + /// + struct __declspec(novtable) delayed_scheduler_interface + { + virtual void schedule_after(pplx::TaskProc_t function, void* context, long long delayInMs) = 0; + }; + + /// + /// Sets the ambient scheduler to be used by the PPL constructs. Note this is not thread safe. 
+ /// + WASTORAGE_API void __cdecl set_wastorage_ambient_scheduler(const std::shared_ptr& scheduler); + + /// + /// Gets the ambient scheduler to be used by the PPL constructs. Note this is not thread safe. + /// + WASTORAGE_API const std::shared_ptr __cdecl get_wastorage_ambient_scheduler(); + + /// + /// Sets the ambient scheduler to be used for scheduling delayed tasks. Note this is not thread safe. + /// + WASTORAGE_API void __cdecl set_wastorage_ambient_delayed_scheduler(const std::shared_ptr& scheduler); + + /// + /// Gets the ambient scheduler to be used for scheduling delayed tasks. Note this is not thread safe. + /// + WASTORAGE_API const std::shared_ptr& __cdecl get_wastorage_ambient_delayed_scheduler(); +#endif + }} // namespace azure::storage #ifndef _WIN32 #define UNREFERENCED_PARAMETER(P) (P) #endif + +#pragma pop_macro("max") diff --git a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/stdafx.h b/Microsoft.WindowsAzure.Storage/includes/was/crc64.h similarity index 58% rename from Microsoft.WindowsAzure.Storage/samples/SamplesCommon/stdafx.h rename to Microsoft.WindowsAzure.Storage/includes/was/crc64.h index d9ffa5d3..70501b0d 100644 --- a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/stdafx.h +++ b/Microsoft.WindowsAzure.Storage/includes/was/crc64.h @@ -1,6 +1,6 @@ // ----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation +// +// Copyright 2019 Microsoft Corporation // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
@@ -15,16 +15,21 @@ // // ----------------------------------------------------------------------------------------- -// stdafx.h : include file for standard system include files, -// or project specific include files that are used frequently, but -// are changed infrequently -// - #pragma once -#include "targetver.h" +#include "wascore/basic_types.h" + + +namespace azure { namespace storage { -#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers + constexpr uint64_t INITIAL_CRC64 = 0ULL; + WASTORAGE_API uint64_t update_crc64(const uint8_t* data, size_t size, uint64_t crc); + WASTORAGE_API void set_crc64_func(std::function func); + inline uint64_t crc64(const uint8_t* data, size_t size) + { + return update_crc64(data, size, INITIAL_CRC64); + } +}} // namespace azure::storage \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/includes/was/file.h b/Microsoft.WindowsAzure.Storage/includes/was/file.h index 2cbcc83d..6e135c89 100644 --- a/Microsoft.WindowsAzure.Storage/includes/was/file.h +++ b/Microsoft.WindowsAzure.Storage/includes/was/file.h @@ -20,6 +20,9 @@ #include "limits" #include "service_client.h" +#pragma push_macro("max") +#undef max + namespace azure { namespace storage { class cloud_file; @@ -48,19 +51,83 @@ namespace azure { namespace storage { class file_access_condition { public: + /// + /// Constructs an empty file access condition. + /// file_access_condition() - : m_valid(false) { } +#if defined(_MSC_VER) && _MSC_VER < 1900 + // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // have implicitly-declared move constructor and move assignment operator. + + /// + /// Initializes a new instance of the class based on an existing instance. + /// + /// An existing object. + file_access_condition(file_access_condition&& other) + { + *this = std::move(other); + } + + /// + /// Returns a reference to an object. 
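The new crc64.h above exposes an incremental `update_crc64` plus a one-shot `crc64` wrapper seeded with `INITIAL_CRC64 = 0`, and `set_crc64_func` lets a caller plug in their own implementation. The header does not reveal which CRC-64 parameters the library uses, so the sketch below assumes the common CRC-64/XZ variant (reflected ECMA-182 polynomial with all-ones pre/post conditioning); folding the conditioning into the update function is what makes an initial value of 0 and chunked chaining work:

```cpp
#include <cstddef>
#include <cstdint>

// Assumed variant: CRC-64/XZ. The ~crc pre/post conditioning inside the update
// function cancels between chunks, so updates can be chained from an initial 0.
inline uint64_t update_crc64(const uint8_t* data, size_t size, uint64_t crc)
{
    crc = ~crc;
    for (size_t i = 0; i < size; ++i)
    {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit)
        {
            // 0xC96C5795D7870F42 is the bit-reflected ECMA-182 polynomial.
            crc = (crc >> 1) ^ ((crc & 1) ? 0xC96C5795D7870F42ULL : 0);
        }
    }
    return ~crc;
}

// One-shot convenience wrapper, mirroring the inline crc64 in the header.
inline uint64_t crc64(const uint8_t* data, size_t size)
{
    return update_crc64(data, size, 0ULL); // 0 == INITIAL_CRC64
}
```

Computing the digest of a buffer in two chunks, `update_crc64(p + k, n - k, update_crc64(p, k, 0))`, yields the same value as `crc64(p, n)`, which is the contract the incremental API implies.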
+ /// + /// An existing object to use to set properties. + /// An object with properties set. + file_access_condition& operator=(file_access_condition&& other) + { + if (this != &other) + { + m_lease_id = std::move(other.m_lease_id); + } + return *this; + } +#endif + + /// + /// Generates a file access condition such that an operation will be performed only if the lease ID on the + /// resource matches the specified lease ID. + /// + /// The lease ID that must match the lease ID of the resource. + /// An object that represents the lease condition. + static file_access_condition generate_lease_condition(utility::string_t lease_id) + { + file_access_condition condition; + condition.set_lease_id(std::move(lease_id)); + return condition; + } + + /// + /// Returns if this condition is empty. + /// + /// true if this condition is empty, false otherwise. bool is_valid() const { - return m_valid; + return !m_lease_id.empty(); } - private: + /// + /// Gets a lease ID that must match the lease on a resource. + /// + /// A string containing the lease ID. + const utility::string_t& lease_id() const + { + return m_lease_id; + } + + /// + /// Sets a lease ID that must match the lease on a resource. + /// + /// A string containing the lease ID. + void set_lease_id(utility::string_t value) + { + m_lease_id = std::move(value); + } - bool m_valid; + private: + utility::string_t m_lease_id; }; /// @@ -339,7 +406,6 @@ namespace azure { namespace storage { /// Initializes a new instance of the class. 
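The reworked `file_access_condition` above replaces the bare `m_valid` flag with a lease ID: the condition is valid exactly when a lease ID is set, and `generate_lease_condition` is the intended factory. A standalone sketch of that shape (stand-in names, `std::string` in place of `utility::string_t`):

```cpp
#include <string>
#include <utility>

// Illustrative stand-in for file_access_condition: "valid" means a lease ID
// has been set, so an empty condition imposes no lease check.
class access_condition
{
public:
    static access_condition generate_lease_condition(std::string lease_id)
    {
        access_condition condition;
        condition.set_lease_id(std::move(lease_id));
        return condition;
    }

    bool is_valid() const { return !m_lease_id.empty(); }
    const std::string& lease_id() const { return m_lease_id; }
    void set_lease_id(std::string value) { m_lease_id = std::move(value); }

private:
    std::string m_lease_id;
};
```

Deriving validity from the lease ID removes the possibility of the flag and the payload disagreeing, which the old `m_valid` design allowed.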
/// cloud_file_share_properties() - : m_quota(0) { } @@ -359,6 +425,10 @@ namespace azure { namespace storage { m_quota = other.m_quota; m_etag = std::move(other.m_etag); m_last_modified = std::move(other.m_last_modified); + m_next_allowed_quota_downgrade_time = std::move(other.m_next_allowed_quota_downgrade_time); + m_provisioned_iops = std::move(other.m_provisioned_iops); + m_provisioned_ingress = std::move(other.m_provisioned_ingress); + m_provisioned_egress = std::move(other.m_provisioned_egress); } return *this; } @@ -401,11 +471,47 @@ namespace azure { namespace storage { return m_last_modified; } + /// + /// Gets the next allowed quota downgrade time for the share, expressed as a UTC value. + /// + /// The share's next allowed quota downgrade time, in UTC format. + utility::datetime next_allowed_quota_downgrade_time() const { + return m_next_allowed_quota_downgrade_time; + } + + /// + /// Gets the provisioned IOPS for the share. + /// + /// Allowed IOPS for this share. + utility::size64_t provisioned_iops() const { + return m_provisioned_iops; + } + + /// + /// Gets the allowed network ingress rate for the share. + /// + /// Allowed network ingress rate for the share, in MiB/s. + utility::size64_t provisioned_ingress() const { + return m_provisioned_ingress; + } + + /// + /// Gets the allowed network egress rate for the share. + /// + /// Allowed network egress rate for the share, in MiB/s.
+ utility::size64_t provisioned_egress() const { + return m_provisioned_egress; + } + private: - utility::size64_t m_quota; + utility::size64_t m_quota{ 0 }; utility::string_t m_etag; utility::datetime m_last_modified; + utility::datetime m_next_allowed_quota_downgrade_time; + utility::size64_t m_provisioned_iops{ 0 }; + utility::size64_t m_provisioned_ingress{ 0 }; + utility::size64_t m_provisioned_egress{ 0 }; void update_etag_and_last_modified(const cloud_file_share_properties& other); @@ -832,7 +938,8 @@ namespace azure { namespace storage { void initialize() { set_authentication_scheme(azure::storage::authentication_scheme::shared_key); - m_default_request_options.set_retry_policy(exponential_retry_policy()); + if (!m_default_request_options.retry_policy().is_valid()) + m_default_request_options.set_retry_policy(exponential_retry_policy()); } file_request_options m_default_request_options; @@ -1254,6 +1361,7 @@ namespace azure { namespace storage { /// Retrieves the share's statistics. /// /// The size number in gigabyte of used data for this share. + /// This method is deprecated in favor of download_share_usage_in_bytes. int32_t download_share_usage() const { return download_share_usage_async().get(); } @@ -1266,18 +1374,60 @@ /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. /// The size number in gigabyte of used data for this share. + /// This method is deprecated in favor of download_share_usage_in_bytes. int32_t download_share_usage(const file_access_condition& condition, const file_request_options& options, operation_context context) const { - return download_share_usage_aysnc(condition, options, context).get(); + return download_share_usage_async(condition, options, context).get(); } /// /// Intitiates an asynchronous operation to retrieve the share's statistics. /// /// A object that that represents the current operation.
+ /// This method is deprecated in favor of download_share_usage_in_bytes_async. pplx::task download_share_usage_async() const { - return download_share_usage_aysnc(file_access_condition(), file_request_options(), operation_context()); + return download_share_usage_async(file_access_condition(), file_request_options(), operation_context()); + } + + /// + /// Initiates an asynchronous operation to retrieve the share's statistics. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + /// This method is deprecated in favor of download_share_usage_in_bytes_async. + WASTORAGE_API pplx::task download_share_usage_async(const file_access_condition& condition, const file_request_options& options, operation_context context) const; + + /// + /// Retrieves the share's statistics. + /// + /// The size, in bytes, of used data for this share. + int64_t download_share_usage_in_bytes() const + { + return download_share_usage_in_bytes_async().get(); + } + + /// + /// Retrieves the share's statistics. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// The size, in bytes, of used data for this share. + int64_t download_share_usage_in_bytes(const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + return download_share_usage_in_bytes_async(condition, options, context).get(); + } + + /// + /// Initiates an asynchronous operation to retrieve the share's statistics. + /// + /// A object that represents the current operation.
+ pplx::task download_share_usage_in_bytes_async() const + { + return download_share_usage_in_bytes_async(file_access_condition(), file_request_options(), operation_context()); } /// @@ -1287,7 +1437,7 @@ namespace azure { namespace storage { /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. /// A object that that represents the current operation. - WASTORAGE_API pplx::task download_share_usage_aysnc(const file_access_condition& condition, const file_request_options& options, operation_context context) const; + WASTORAGE_API pplx::task download_share_usage_in_bytes_async(const file_access_condition& condition, const file_request_options& options, operation_context context) const; /// /// Gets permissions settings for the share. @@ -1369,6 +1519,92 @@ namespace azure { namespace storage { /// A object that that represents the current operation. WASTORAGE_API pplx::task upload_permissions_async(const file_share_permissions& permissions, const file_access_condition& condition, const file_request_options& options, operation_context context) const; + /// + /// Gets the Security Descriptor Definition Language (SDDL) for a given security descriptor. + /// + /// Security descriptor of the permission. + /// A object that contains permission in the Security Descriptor Definition Language (SDDL). + utility::string_t download_file_permission(const utility::string_t& permission_key) const + { + return download_file_permission_async(permission_key).get(); + } + + /// + /// Gets the Security Descriptor Definition Language (SDDL) for a given security descriptor. + /// + /// Security descriptor of the permission. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. 
+ /// A object that contains permission in the Security Descriptor Definition Language (SDDL). + utility::string_t download_file_permission(const utility::string_t& permission_key, const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + return download_file_permission_async(permission_key, condition, options, context).get(); + } + + /// + /// Initiates an asynchronous operation to get the Security Descriptor Definition Language (SDDL) for a given security descriptor. + /// + /// Security descriptor of the permission. + /// A object that represents the current operation. + pplx::task download_file_permission_async(const utility::string_t& permission_key) const + { + return download_file_permission_async(permission_key, file_access_condition(), file_request_options(), operation_context()); + } + + /// + /// Initiates an asynchronous operation to get the Security Descriptor Definition Language (SDDL) for a given security descriptor. + /// + /// Security descriptor of the permission. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + WASTORAGE_API pplx::task download_file_permission_async(const utility::string_t& permission_key, const file_access_condition& condition, const file_request_options& options, operation_context context) const; + + /// + /// Creates a permission in the share. The created security descriptor can be used for the files/directories in this share. + /// + /// A that contains permission in the Security Descriptor Definition Language (SDDL). + /// A that contains security descriptor of the permission.
+ utility::string_t upload_file_permission(const utility::string_t& permission) const + { + return upload_file_permission_async(permission).get(); + } + + /// + /// Creates a permission in the share. The created security descriptor can be used for the files/directories in this share. + /// + /// A that contains permission in the Security Descriptor Definition Language (SDDL). + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A that contains security descriptor of the permission. + utility::string_t upload_file_permission(const utility::string_t& permission, const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + return upload_file_permission_async(permission, condition, options, context).get(); + } + + /// + /// Initiates an asynchronous operation to create a permission in the share. The created security descriptor can be used for the files/directories in this share. + /// + /// A that contains permission in the Security Descriptor Definition Language (SDDL). + /// A object that represents the current operation. + pplx::task upload_file_permission_async(const utility::string_t& permission) const + { + return upload_file_permission_async(permission, file_access_condition(), file_request_options(), operation_context()); + } + + /// + /// Initiates an asynchronous operation to create a permission in the share. The created security descriptor can be used for the files/directories in this share. + /// + /// A that contains permission in the Security Descriptor Definition Language (SDDL). + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation.
+ /// A object that represents the current operation. + WASTORAGE_API pplx::task upload_file_permission_async(const utility::string_t& permission, const file_access_condition& condition, const file_request_options& options, operation_context context) const; + /// /// Resize the share. /// @@ -1545,11 +1781,46 @@ namespace azure { namespace storage { typedef result_segment list_file_and_directory_result_segment; typedef result_iterator list_file_and_diretory_result_iterator; - class list_file_and_directory_item; + /// + /// Valid set of file attributes. + /// + enum cloud_file_attributes : uint64_t + { + preserve = 0x0, + source = 0x1, + none = 0x2, + readonly = 0x4, + hidden = 0x8, + system = 0x10, + directory = 0x20, + archive = 0x40, + temporary = 0x80, + offline = 0x100, + not_content_indexed = 0x200, + no_scrub_data = 0x400, + }; + + inline cloud_file_attributes operator|(cloud_file_attributes lhs, cloud_file_attributes rhs) + { + return static_cast<cloud_file_attributes>(static_cast<uint64_t>(lhs) | static_cast<uint64_t>(rhs)); + } + + inline cloud_file_attributes& operator|=(cloud_file_attributes& lhs, cloud_file_attributes rhs) + { + return lhs = lhs | rhs; + } class cloud_file_directory_properties { public: + + struct now_t {}; + struct inherit_t {}; + struct preserve_t {}; + static constexpr now_t now{}; + static constexpr inherit_t inherit{}; + static constexpr preserve_t preserve{}; + /// /// Initializes a new instance of the class.
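The `cloud_file_attributes` enum added above is a bitmask mirroring the SMB file-attribute bits, and the patch supplies `operator|` / `operator|=` so flags can be combined without casting at every call site. A self-contained copy of a few of those values and operators, showing the intended usage:

```cpp
#include <cstdint>

// Same shape as cloud_file_attributes in the patch; values copied from it.
enum cloud_file_attributes : uint64_t
{
    preserve = 0x0,
    readonly = 0x4,
    hidden   = 0x8,
    archive  = 0x40,
};

inline cloud_file_attributes operator|(cloud_file_attributes lhs, cloud_file_attributes rhs)
{
    // Widen to the underlying type, OR the bits, and narrow back to the enum.
    return static_cast<cloud_file_attributes>(
        static_cast<uint64_t>(lhs) | static_cast<uint64_t>(rhs));
}

inline cloud_file_attributes& operator|=(cloud_file_attributes& lhs, cloud_file_attributes rhs)
{
    return lhs = lhs | rhs;
}

// Combines flags the way a caller would before setting directory attributes.
inline cloud_file_attributes readonly_hidden_archive()
{
    cloud_file_attributes attrs = readonly | hidden; // 0x4 | 0x8
    attrs |= archive;                                // adds 0x40
    return attrs;                                    // 0x4C
}
```

Because the enum is unscoped with an explicit `uint64_t` underlying type, individual bits can still be tested with a plain bitwise AND against the underlying value.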
/// @@ -1572,6 +1843,19 @@ namespace azure { namespace storage { { m_etag = std::move(other.m_etag); m_last_modified = std::move(other.m_last_modified); + m_server_encrypted = std::move(other.m_server_encrypted); + m_permission = std::move(other.m_permission); + m_permission_key = std::move(other.m_permission_key); + m_attributes = std::move(other.m_attributes); + m_creation_time = std::move(other.m_creation_time); + m_creation_time_now = std::move(other.m_creation_time_now); + m_creation_time_preserve = std::move(other.m_creation_time_preserve); + m_last_write_time = std::move(other.m_last_write_time); + m_last_write_time_now = std::move(other.m_last_write_time_now); + m_last_write_time_preserve = std::move(other.m_last_write_time_preserve); + m_change_time = std::move(other.m_change_time); + m_file_id = std::move(other.m_file_id); + m_parent_id = std::move(other.m_parent_id); } return *this; } @@ -1596,139 +1880,405 @@ namespace azure { namespace storage { return m_last_modified; } - private: - - utility::string_t m_etag; - utility::datetime m_last_modified; - - void update_etag_and_last_modified(const cloud_file_directory_properties& other); - void update_etag(const cloud_file_directory_properties& other); - - friend class cloud_file_directory; - friend class protocol::file_response_parsers; - }; - - /// - /// Represents a directory of files. - /// - /// Shares, which are encapsulated as objects, hold directories, and directories hold files. Directories can also contain sub-directories. - class cloud_file_directory - { - public: - /// - /// Initializes a new instance of the class. + /// Gets if the server is encrypted. /// - cloud_file_directory() + /// true if a server is encrypted. + bool server_encrypted() { + return m_server_encrypted; } /// - /// Initializes a new instance of the class. + /// Sets if the server is encrypted. /// - /// An object containing the absolute URI to the directory for all locations. 
- WASTORAGE_API cloud_file_directory(storage_uri uri); + /// If the server is encrypted. + void set_server_encrypted(bool value) + { + m_server_encrypted = value; + } /// - /// Initializes a new instance of the class. + /// Gets the permission property. /// - /// An object containing the absolute URI to the directory for all locations. - /// The to use. - WASTORAGE_API cloud_file_directory(storage_uri uri, storage_credentials credentials); + /// A object that contains permission in the Security Descriptor Definition Language(SDDL). + const utility::string_t& permission() const + { + return m_permission; + } /// - /// Initializes a new instance of the class. + /// Sets the permission property. /// - /// The name of the directory. - /// The File share it blongs to. - WASTORAGE_API cloud_file_directory(utility::string_t name, cloud_file_share share); + /// A that contains permission in the Security Descriptor Definition Language (SDDL). + void set_permission(utility::string_t value) + { + m_permission = std::move(value); + m_permission_key.clear(); + } /// - /// Initializes a new instance of the class. + /// Sets the permission property value to inherit, which means to inherit from the parent directory. /// - /// The name of the directory. - /// The File share it blongs to. - /// A set of properties for the directory. - /// A collection of name-value pairs defining the metadata of the directory. - WASTORAGE_API cloud_file_directory(utility::string_t name, cloud_file_share share, cloud_file_directory_properties properties, cloud_metadata metadata); + /// Explicitly specified permission value, must be . + void set_permission(inherit_t) + { + m_permission = protocol::header_value_file_permission_inherit; + m_permission_key.clear(); + } /// - /// Initializes a new instance of the class. + /// Sets the permission property value to preserve, which means to keep existing value unchanged. /// - /// The name of the directory. - /// The File directory it blongs to. 
- WASTORAGE_API cloud_file_directory(utility::string_t name, cloud_file_directory directory); + /// Explicitly specified permission value, must be . + void set_permission(preserve_t) + { + m_permission = protocol::header_value_file_property_preserve; + m_permission_key.clear(); + } /// - /// Initializes a new instance of the class. + /// Gets security descriptor of the permission. /// - /// The name of the directory. - /// The File directory it blongs to. - /// A set of properties for the directory. - /// A collection of name-value pairs defining the metadata of the directory. - WASTORAGE_API cloud_file_directory(utility::string_t name, cloud_file_directory directory, cloud_file_directory_properties properties, cloud_metadata metadata); - -#if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, - // have implicitly-declared move constructor and move assignment operator. - - cloud_file_directory(cloud_file_directory&& other) + /// A that contains security descriptor of the permission. + const utility::string_t& permission_key() const { - *this = std::move(other); + return m_permission_key; } - cloud_file_directory& operator=(cloud_file_directory&& other) + /// + /// Sets security descriptor of permission. + /// + /// A that contains security descriptor of the permission. + void set_permission_key(utility::string_t value) { - if (this != &other) - { - m_name = std::move(other.m_name); - m_share = std::move(other.m_share); - m_uri = std::move(other.m_uri); - m_metadata = std::move(other.m_metadata); - m_properties = std::move(other.m_properties); - } - return *this; + m_permission.clear(); + m_permission_key = std::move(value); } -#endif /// - /// Returns an that can be used to to lazily enumerate a collection of file or directory items. + /// Gets file system attributes set on this directory. 
/// - /// An that can be used to to lazily enumerate a collection of file or directory items in the the directory. - list_file_and_diretory_result_iterator list_files_and_directories() const + /// An that represents a set of attributes. + cloud_file_attributes attributes() const { - return list_files_and_directories(0); + return m_attributes; } /// - /// Returns an that can be used to to lazily enumerate a collection of file or directory items. + /// Sets file system attributes on this directory. /// - /// A non-negative integer value that indicates the maximum number of results to be returned. - /// If this value is zero, the maximum possible number of results will be returned. - /// An that can be used to to lazily enumerate a collection of file or directory items in the the directory. - list_file_and_diretory_result_iterator list_files_and_directories(int64_t max_results) const + /// An that represents a set of attributes. + void set_attributes(cloud_file_attributes value) { - return list_files_and_directories(max_results, file_request_options(), operation_context()); + m_attributes = value; } - - /// - /// Returns an that can be used to to lazily enumerate a collection of file or directory items. - /// - /// A non-negative integer value that indicates the maximum number of results to be returned. - /// If this value is zero, the maximum possible number of results will be returned. - /// An object that specifies additional options for the request. - /// An object that represents the context for the current operation. - /// An that can be used to to lazily enumerate a collection of file or directory items in the the directory. - WASTORAGE_API list_file_and_diretory_result_iterator list_files_and_directories(int64_t max_results, const file_request_options& options, operation_context context) const; - + /// - /// Returns a result segment that can be used to to lazily enumerate a collection of file or directory items. 
+ /// Gets the creation time property for this directory. /// - /// A continuation token returned by a previous listing operation. - /// An that can be used to to lazily enumerate a collection of file or directory items in the the directory. - list_file_and_directory_result_segment list_files_and_directories_segmented(const continuation_token& token) const + /// An ISO 8601 datetime . + utility::datetime creation_time() const { - return list_files_and_directories_segmented_async(token).get(); + return m_creation_time; + } + + /// + /// Sets the creation time property for this directory. + /// + /// An ISO 8601 datetime . + void set_creation_time(utility::datetime value) + { + m_creation_time = std::move(value); + m_creation_time_now = false; + m_creation_time_preserve = false; + } + + /// + /// Sets the creation time property for this directory to now, which indicates the time of the request. + /// + /// Explicitly specified datetime value, must be . + void set_creation_time(now_t) + { + m_creation_time = utility::datetime(); + m_creation_time_now = true; + m_creation_time_preserve = false; + } + + /// + /// Sets the creation time property for this directory to preserve, which means to keep the existing value unchanged. + /// + /// Explicitly specified datetime value, must be . + void set_creation_time(preserve_t) + { + m_creation_time = utility::datetime(); + m_creation_time_now = false; + m_creation_time_preserve = true; + } + + /// + /// Gets the last write time property for this directory. + /// + /// An ISO 8601 datetime . + utility::datetime last_write_time() const + { + return m_last_write_time; + } + + /// + /// Sets the last write time property for this directory. + /// + /// An ISO 8601 datetime . 
+ void set_last_write_time(utility::datetime value) + { + m_last_write_time = std::move(value); + m_last_write_time_now = false; + m_last_write_time_preserve = false; + } + + /// + /// Sets the last write time property for this directory to now, which indicates the time of the request. + /// + /// Explicitly specified datetime value, must be . + void set_last_write_time(now_t) + { + m_last_write_time = utility::datetime(); + m_last_write_time_now = true; + m_last_write_time_preserve = false; + } + + /// + /// Sets the last write time property for this directory to preserve, which means to keep the existing value unchanged. + /// + /// Explicitly specified datetime value, must be . + void set_last_write_time(preserve_t) + { + m_last_write_time = utility::datetime(); + m_last_write_time_now = false; + m_last_write_time_preserve = true; + } + + /// + /// Gets the change time property for this directory. + /// + /// An ISO 8601 datetime . + utility::datetime change_time() const + { + return m_change_time; + } + + /// + /// Gets the file id property for this directory. + /// + /// A contains the file id. + const utility::string_t& file_id() const + { + return m_file_id; + } + + /// + /// Gets the parent file id property for this directory. + /// + /// A contains the parent file id.
+ const utility::string_t& file_parent_id() const + { + return m_parent_id; + } + + private: + + utility::string_t m_etag; + utility::datetime m_last_modified; + + bool m_server_encrypted{ false }; + + utility::string_t m_permission; + utility::string_t m_permission_key; + cloud_file_attributes m_attributes{ cloud_file_attributes::preserve }; + utility::datetime m_creation_time; + bool m_creation_time_now{ false }; + bool m_creation_time_preserve{ true }; + utility::datetime m_last_write_time; + bool m_last_write_time_now{ false }; + bool m_last_write_time_preserve{ true }; + utility::datetime m_change_time; + utility::string_t m_file_id; + utility::string_t m_parent_id; + + void update_etag_and_last_modified(const cloud_file_directory_properties& other); + + friend class cloud_file_directory; + friend class protocol::file_response_parsers; + }; + + /// + /// Represents a directory of files. + /// + /// Shares, which are encapsulated as objects, hold directories, and directories hold files. Directories can also contain sub-directories. + class cloud_file_directory + { + public: + + /// + /// Initializes a new instance of the class. + /// + cloud_file_directory() + { + } + + /// + /// Initializes a new instance of the class. + /// + /// An object containing the absolute URI to the directory for all locations. + WASTORAGE_API cloud_file_directory(storage_uri uri); + + /// + /// Initializes a new instance of the class. + /// + /// An object containing the absolute URI to the directory for all locations. + /// The to use. + WASTORAGE_API cloud_file_directory(storage_uri uri, storage_credentials credentials); + + /// + /// Initializes a new instance of the class. + /// + /// The name of the directory. + /// The File share it belongs to. + WASTORAGE_API cloud_file_directory(utility::string_t name, cloud_file_share share); + + /// + /// Initializes a new instance of the class. + /// + /// The name of the directory. + /// The File share it belongs to.
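The `now_t` / `inherit_t` / `preserve_t` tag types introduced for `cloud_file_directory_properties` let overload resolution select the "use the request time" or "keep the existing value" behavior from a call like `set_creation_time(cloud_file_directory_properties::now)`, instead of reserving magic datetime values. A minimal sketch of that tag dispatch, with stand-in names and `std::string` in place of `utility::datetime`:

```cpp
#include <string>
#include <utility>

// Minimal stand-in for the now/preserve tag-dispatch setters in the patch.
class directory_properties
{
public:
    struct now_t {};
    struct preserve_t {};
    static constexpr now_t now{};           // C++17: implicitly inline
    static constexpr preserve_t preserve{};

    void set_creation_time(std::string value) // explicit timestamp
    {
        m_creation_time = std::move(value);
        m_now = false;
        m_preserve = false;
    }
    void set_creation_time(now_t)      { m_creation_time.clear(); m_now = true;  m_preserve = false; }
    void set_creation_time(preserve_t) { m_creation_time.clear(); m_now = false; m_preserve = true; }

    bool use_request_time() const { return m_now; }
    bool keep_existing() const { return m_preserve; }
    const std::string& creation_time() const { return m_creation_time; }

private:
    std::string m_creation_time;
    bool m_now{ false };
    bool m_preserve{ true }; // default mirrors the patch: preserve the existing value
};

// Helper for illustration: a properties object switched to "request time" mode.
inline directory_properties with_now()
{
    directory_properties p;
    p.set_creation_time(directory_properties::now);
    return p;
}
```

Exactly one of the three modes (explicit value, now, preserve) is active at a time, which is why each setter resets the other two flags.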
+ /// A set of properties for the directory. + /// A collection of name-value pairs defining the metadata of the directory. + WASTORAGE_API cloud_file_directory(utility::string_t name, cloud_file_share share, cloud_file_directory_properties properties, cloud_metadata metadata); + + /// + /// Initializes a new instance of the class. + /// + /// The name of the directory. + /// The File directory it belongs to. + WASTORAGE_API cloud_file_directory(utility::string_t name, cloud_file_directory directory); + + /// + /// Initializes a new instance of the class. + /// + /// The name of the directory. + /// The File directory it belongs to. + /// A set of properties for the directory. + /// A collection of name-value pairs defining the metadata of the directory. + WASTORAGE_API cloud_file_directory(utility::string_t name, cloud_file_directory directory, cloud_file_directory_properties properties, cloud_metadata metadata); + +#if defined(_MSC_VER) && _MSC_VER < 1900 + // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // have implicitly-declared move constructor and move assignment operator. + + cloud_file_directory(cloud_file_directory&& other) + { + *this = std::move(other); + } + + cloud_file_directory& operator=(cloud_file_directory&& other) + { + if (this != &other) + { + m_name = std::move(other.m_name); + m_share = std::move(other.m_share); + m_uri = std::move(other.m_uri); + m_metadata = std::move(other.m_metadata); + m_properties = std::move(other.m_properties); + } + return *this; + } +#endif + + /// + /// Returns an that can be used to lazily enumerate a collection of file or directory items. + /// + /// An that can be used to lazily enumerate a collection of file or directory items in the directory.
+ list_file_and_diretory_result_iterator list_files_and_directories() const + { + return list_files_and_directories(0); + } + + /// + /// Returns an that can be used to lazily enumerate a collection of file or directory items. + /// + /// The file/directory name prefix. + /// An that can be used to lazily enumerate a collection of file or directory items in the directory. + list_file_and_diretory_result_iterator list_files_and_directories(const utility::string_t& prefix) const + { + return list_files_and_directories(prefix, 0); + } + + /// + /// Returns an that can be used to lazily enumerate a collection of file or directory items. + /// + /// A non-negative integer value that indicates the maximum number of results to be returned. + /// If this value is zero, the maximum possible number of results will be returned. + /// An that can be used to lazily enumerate a collection of file or directory items in the directory. + list_file_and_diretory_result_iterator list_files_and_directories(int64_t max_results) const + { + return list_files_and_directories(utility::string_t(), max_results); + } + + /// + /// Returns an that can be used to lazily enumerate a collection of file or directory items. + /// + /// The file/directory name prefix. + /// A non-negative integer value that indicates the maximum number of results to be returned. + /// If this value is zero, the maximum possible number of results will be returned. + /// An that can be used to lazily enumerate a collection of file or directory items in the directory. + list_file_and_diretory_result_iterator list_files_and_directories(const utility::string_t& prefix, int64_t max_results) const + { + return list_files_and_directories(prefix, max_results, file_request_options(), operation_context()); + } + + /// + /// Returns an that can be used to lazily enumerate a collection of file or directory items.
+ /// + /// A non-negative integer value that indicates the maximum number of results to be returned. + /// If this value is zero, the maximum possible number of results will be returned. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An that can be used to lazily enumerate a collection of file or directory items in the directory. + list_file_and_diretory_result_iterator list_files_and_directories(int64_t max_results, const file_request_options& options, operation_context context) const + { + return list_files_and_directories(utility::string_t(), max_results, options, context); + } + + /// + /// Returns an that can be used to lazily enumerate a collection of file or directory items. + /// + /// The file/directory name prefix. + /// A non-negative integer value that indicates the maximum number of results to be returned. + /// If this value is zero, the maximum possible number of results will be returned. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An that can be used to lazily enumerate a collection of file or directory items in the directory. + WASTORAGE_API list_file_and_diretory_result_iterator list_files_and_directories(const utility::string_t& prefix, int64_t max_results, const file_request_options& options, operation_context context) const; + + /// + /// Returns a result segment that can be used to lazily enumerate a collection of file or directory items. + /// + /// A continuation token returned by a previous listing operation. + /// An that can be used to lazily enumerate a collection of file or directory items in the directory.
+ list_file_and_directory_result_segment list_files_and_directories_segmented(const continuation_token& token) const + { + return list_files_and_directories_segmented_async(token).get(); + } + + /// + /// Returns a result segment that can be used to lazily enumerate a collection of file or directory items. + /// + /// The file/directory name prefix. + /// A continuation token returned by a previous listing operation. + /// An that can be used to lazily enumerate a collection of file or directory items in the directory. + list_file_and_directory_result_segment list_files_and_directories_segmented(const utility::string_t& prefix, const continuation_token& token) const + { + return list_files_and_directories_segmented_async(prefix, token).get(); } /// @@ -1744,6 +2294,20 @@ namespace azure { namespace storage { return list_files_and_directories_segmented_async(max_results, token, options, context).get(); } + /// + /// Returns a result segment that can be used to lazily enumerate a collection of file or directory items. + /// + /// The file/directory name prefix. + /// A non-negative integer value that indicates the maximum number of results to be returned. + /// A continuation token returned by a previous listing operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An that can be used to lazily enumerate a collection of file or directory items in the directory.
+ list_file_and_directory_result_segment list_files_and_directories_segmented(const utility::string_t& prefix, int64_t max_results, const continuation_token& token, const file_request_options& options, operation_context context) const + { + return list_files_and_directories_segmented_async(prefix, max_results, token, options, context).get(); + } + /// /// Initiates an asynchronous operation to return a result segment that can be used to lazily enumerate a collection of file or directory items. /// @@ -1754,6 +2318,17 @@ namespace azure { namespace storage { return list_files_and_directories_segmented_async(0, token, file_request_options(), operation_context()); } + /// + /// Initiates an asynchronous operation to return a result segment that can be used to lazily enumerate a collection of file or directory items. + /// + /// The file/directory name prefix. + /// A continuation token returned by a previous listing operation. + /// A object of type that represents the current operation. + pplx::task list_files_and_directories_segmented_async(const utility::string_t& prefix, const continuation_token& token) const + { + return list_files_and_directories_segmented_async(prefix, 0, token, file_request_options(), operation_context()); + } + /// /// Initiates an asynchronous operation to return a result segment that can be used to lazily enumerate a collection of file or directory items. /// @@ -1762,7 +2337,21 @@ namespace azure { namespace storage { /// An object that specifies additional options for the request. /// An object that represents the context for the current operation. /// A object of type that represents the current operation.
- WASTORAGE_API pplx::task list_files_and_directories_segmented_async(int64_t max_results, const continuation_token& token, const file_request_options& options, operation_context context) const; + pplx::task list_files_and_directories_segmented_async(int64_t max_results, const continuation_token& token, const file_request_options& options, operation_context context) const + { + return list_files_and_directories_segmented_async(utility::string_t(), max_results, token, options, context); + } + + /// + /// Initiates an asynchronous operation to return a result segment that can be used to lazily enumerate a collection of file or directory items. + /// + /// The file/directory name prefix. + /// A non-negative integer value that indicates the maximum number of results to be returned. + /// A continuation token returned by a previous listing operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + WASTORAGE_API pplx::task list_files_and_directories_segmented_async(const utility::string_t& prefix, int64_t max_results, const continuation_token& token, const file_request_options& options, operation_context context) const; /// /// Creates the directory. @@ -2011,6 +2600,43 @@ namespace azure { namespace storage { /// A object that represents the current operation. WASTORAGE_API pplx::task download_attributes_async(const file_access_condition& condition, const file_request_options& options, operation_context context); + /// + /// Updates the directory's properties. + /// + void upload_properties() const + { + upload_properties_async().wait(); + } + + /// + /// Updates the directory's properties. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation.
+ void upload_properties(const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + upload_properties_async(condition, options, context).wait(); + } + + /// + /// Initiates an asynchronous operation to update the directory's properties. + /// + /// A object that represents the current operation. + pplx::task upload_properties_async() const + { + return upload_properties_async(file_access_condition(), file_request_options(), operation_context()); + } + + /// + /// Initiates an asynchronous operation to update the directory's properties. + /// + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + WASTORAGE_API pplx::task upload_properties_async(const file_access_condition& condition, const file_request_options& options, operation_context context) const; + /// /// Uploads the directory's metadata. /// @@ -2164,200 +2790,472 @@ namespace azure { namespace storage { { public: + struct now_t {}; + struct inherit_t {}; + struct preserve_t {}; + struct source_t {}; + static constexpr now_t now{}; + static constexpr inherit_t inherit{}; + static constexpr preserve_t preserve{}; + static constexpr source_t source{}; + + /// + /// Initializes a new instance of the class. + /// + cloud_file_properties() + : m_lease_status(azure::storage::lease_status::unspecified), m_lease_state(azure::storage::lease_state::unspecified), + m_lease_duration(azure::storage::lease_duration::unspecified) + { + } + +#if defined(_MSC_VER) && _MSC_VER < 1900 + // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, + // have implicitly-declared move constructor and move assignment operator.
+ + cloud_file_properties(cloud_file_properties&& other) + { + *this = std::move(other); + } + + cloud_file_properties& operator=(cloud_file_properties&& other) + { + if (this != &other) + { + m_length = other.m_length; + m_etag = std::move(other.m_etag); + m_last_modified = std::move(other.m_last_modified); + + m_type = std::move(other.m_type); + m_content_type = std::move(other.m_content_type); + m_content_encoding = std::move(other.m_content_encoding); + m_content_language = std::move(other.m_content_language); + m_cache_control = std::move(other.m_cache_control); + m_content_md5 = std::move(other.m_content_md5); + m_content_disposition = std::move(other.m_content_disposition); + m_server_encrypted = std::move(other.m_server_encrypted); + + m_permission = std::move(other.m_permission); + m_permission_key = std::move(other.m_permission_key); + m_attributes = std::move(other.m_attributes); + m_creation_time = std::move(other.m_creation_time); + m_creation_time_now = std::move(other.m_creation_time_now); + m_creation_time_preserve = std::move(other.m_creation_time_preserve); + m_last_write_time = std::move(other.m_last_write_time); + m_last_write_time_now = std::move(other.m_last_write_time_now); + m_last_write_time_preserve = std::move(other.m_last_write_time_preserve); + m_change_time = std::move(other.m_change_time); + m_file_id = std::move(other.m_file_id); + m_parent_id = std::move(other.m_parent_id); + m_lease_status = std::move(other.m_lease_status); + m_lease_state = std::move(other.m_lease_state); + m_lease_duration = std::move(other.m_lease_duration); + } + return *this; + } + +#endif + + /// + /// Gets the size of the file, in bytes. + /// + /// The file's size in bytes. + utility::size64_t length() const + { + return m_length; + } + + /// + /// Gets the size of the file, in bytes. + /// + /// The file's size in bytes. + utility::size64_t size() const + { + return m_length; + } + + /// + /// Gets the file's ETag value. + /// + /// The file's ETag value. 
+ const utility::string_t& etag() const + { + return m_etag; + } + + /// + /// Gets the last-modified time for the file, expressed as a UTC value. + /// + /// The file's last-modified time, in UTC format. + utility::datetime last_modified() const + { + return m_last_modified; + } + + /// + /// Gets the type of the file. + /// + /// An object that indicates the type of the file. + const utility::string_t& type() const + { + return m_type; + } + + /// + /// Gets the content-type value stored for the file. + /// + /// The file's content-type value. + const utility::string_t& content_type() const + { + return m_content_type; + } + + /// + /// Sets the content-type value stored for the file. + /// + /// The file's content-type value. + void set_content_type(utility::string_t content_type) + { + m_content_type = std::move(content_type); + } + + /// + /// Gets the content-encoding value stored for the file. + /// + /// The file's content-encoding value. + const utility::string_t& content_encoding() const + { + return m_content_encoding; + } + + /// + /// Sets the content-encoding value stored for the file. + /// + /// The file's content-encoding value. + void set_content_encoding(utility::string_t value) + { + m_content_encoding = std::move(value); + } + + /// + /// Gets the content-language value stored for the file. + /// + /// The file's content-language value. + const utility::string_t& content_language() const + { + return m_content_language; + } + + /// + /// Sets the content-language value stored for the file. + /// + /// The file's content-language value. + void set_content_language(utility::string_t value) + { + m_content_language = std::move(value); + } + + /// + /// Gets the cache-control value stored for the file. + /// + /// The file's cache-control value. + const utility::string_t& cache_control() const + { + return m_cache_control; + } + + /// + /// Sets the cache-control value stored for the file. + /// + /// The file's cache-control value. 
+ void set_cache_control(utility::string_t value) + { + m_cache_control = std::move(value); + } + + /// + /// Gets the content-MD5 value stored for the file. + /// + /// The file's content-MD5 hash. + const utility::string_t& content_md5() const + { + return m_content_md5; + } + + /// + /// Sets the content-MD5 value stored for the file. + /// + /// The file's content-MD5 hash. + void set_content_md5(utility::string_t value) + { + m_content_md5 = std::move(value); + } + + /// + /// Gets the content-disposition value stored for the file. + /// + /// The file's content-disposition value. + const utility::string_t& content_disposition() const + { + return m_content_disposition; + } + + /// + /// Sets the content-disposition value stored for the file. + /// + /// The file's content-disposition value. + void set_content_disposition(utility::string_t value) + { + m_content_disposition = std::move(value); + } + + /// + /// Gets whether the file is encrypted on the server. + /// + /// true if the file is encrypted on the server. + bool server_encrypted() const + { + return m_server_encrypted; + } + /// - /// Initializes a new instance of the class. + /// Sets whether the file is encrypted on the server. /// - cloud_file_properties() - : m_length(0) + /// Whether the file is encrypted on the server. + void set_server_encrypted(bool value) { + m_server_encrypted = value; } -#if defined(_MSC_VER) && _MSC_VER < 1900 - // Compilers that fully support C++ 11 rvalue reference, e.g. g++ 4.8+, clang++ 3.3+ and Visual Studio 2015+, - // have implicitly-declared move constructor and move assignment operator. + /// + /// Gets the permission property. + /// + /// A object that contains permission in the Security Descriptor Definition Language (SDDL). + const utility::string_t& permission() const + { + return m_permission; + } - cloud_file_properties(cloud_file_properties&& other) + /// + /// Sets the permission property. + /// + /// A that contains permission in the Security Descriptor Definition Language (SDDL).
+ void set_permission(utility::string_t value) { - *this = std::move(other); + m_permission = std::move(value); + m_permission_key.clear(); } - cloud_file_properties& operator=(cloud_file_properties&& other) + /// + /// Sets the permission property value to inherit, which means to inherit from the parent directory. + /// + /// Explicitly specified permission value, must be . + void set_permission(inherit_t) { - if (this != &other) - { - m_length = other.m_length; - m_etag = std::move(other.m_etag); - m_last_modified = std::move(other.m_last_modified); + m_permission = protocol::header_value_file_permission_inherit; + m_permission_key.clear(); + } - m_type = std::move(other.m_type); - m_content_type = std::move(other.m_content_type); - m_content_encoding = std::move(other.m_content_encoding); - m_content_language = std::move(other.m_content_language); - m_cache_control = std::move(other.m_cache_control); - m_content_md5 = std::move(other.m_content_md5); - m_content_disposition = std::move(other.m_content_disposition); - } - return *this; + /// + /// Sets the permission property value to preserve, which means to keep the existing value unchanged. + /// + /// Explicitly specified permission value, must be . + void set_permission(preserve_t) + { + m_permission = protocol::header_value_file_property_preserve; + m_permission_key.clear(); } -#endif + /// + /// Sets the permission property value to source, which means the security descriptor for the target file is copied from the source file. + /// + /// Explicitly specified permission value, must be . + /// + /// This only applies to copy operations. + /// + void set_permission(source_t) + { + m_permission = protocol::header_value_file_property_source; + m_permission_key.clear(); + } /// - /// Gets the size of the file, in bytes. + /// Gets the security descriptor of the permission. /// - /// The file's size in bytes. - utility::size64_t length() const + /// A that contains the security descriptor of the permission.
+ const utility::string_t& permission_key() const { - return m_length; + return m_permission_key; } /// - /// Gets the size of the file, in bytes. + /// Sets security descriptor of permission. /// - /// The file's size in bytes. - utility::size64_t size() const + /// A that contains security descriptor of the permission. + void set_permission_key(utility::string_t value) { - return m_length; + m_permission.clear(); + m_permission_key = std::move(value); } /// - /// Gets the file's ETag value. + /// Gets file system attributes set on this file. /// - /// The file's ETag value. - const utility::string_t& etag() const + /// An that represents a set of attributes. + cloud_file_attributes attributes() const { - return m_etag; + return m_attributes; } /// - /// Gets the last-modified time for the file, expressed as a UTC value. + /// Sets file system attributes on this file. /// - /// The file's last-modified time, in UTC format. - utility::datetime last_modified() const + /// An that represents a set of attributes. + void set_attributes(cloud_file_attributes value) { - return m_last_modified; + m_attributes = value; } /// - /// Gets the type of the file. + /// Gets the creation time property for this file. /// - /// An object that indicates the type of the file. - const utility::string_t& type() const + /// An ISO 8601 datetime . + utility::datetime creation_time() const { - return m_type; + return m_creation_time; } /// - /// Gets the content-type value stored for the file. + /// Sets the creation time property for this file. /// - /// The file's content-type value. - const utility::string_t& content_type() const + /// An ISO 8601 datetime . + void set_creation_time(utility::datetime value) { - return m_content_type; + m_creation_time = std::move(value); + m_creation_time_now = false; + m_creation_time_preserve = false; } /// - /// Sets the content-type value stored for the file. 
+ /// Sets the creation time property for this file to now, which indicates the time of the request. /// - /// The file's content-type value. - void set_content_type(utility::string_t content_type) + /// Explicitly specified datetime value, must be . + void set_creation_time(now_t) { - m_content_type = std::move(content_type); + m_creation_time = utility::datetime(); + m_creation_time_now = true; + m_creation_time_preserve = false; } /// - /// Gets the content-encoding value stored for the file. + /// Sets the creation time property for this file to preserve, which means to keep the existing value unchanged. /// - /// The file's content-encoding value. - const utility::string_t& content_encoding() const + /// Explicitly specified datetime value, must be . + void set_creation_time(preserve_t) { - return m_content_encoding; + m_creation_time = utility::datetime(); + m_creation_time_now = false; + m_creation_time_preserve = true; } /// - /// Sets the content-encoding value stored for the file. + /// Gets the last write time property for this file. /// - /// The file's content-encoding value. - void set_content_encoding(utility::string_t value) + /// An ISO 8601 datetime . + utility::datetime last_write_time() const { - m_content_encoding = std::move(value); + return m_last_write_time; } /// - /// Gets the content-language value stored for the file. + /// Sets the last write time property for this file. /// - /// The file's content-language value. - const utility::string_t& content_language() const + /// An ISO 8601 datetime . + void set_last_write_time(utility::datetime value) { - return m_content_language; + m_last_write_time = std::move(value); + m_last_write_time_now = false; + m_last_write_time_preserve = false; } /// - /// Sets the content-language value stored for the file. + /// Sets the last write time property for this file to now, which indicates the time of the request. /// - /// The file's content-language value. 
- void set_content_language(utility::string_t value) + /// Explicitly specified datetime value, must be . + void set_last_write_time(now_t) { - m_content_language = std::move(value); + m_last_write_time = utility::datetime(); + m_last_write_time_now = true; + m_last_write_time_preserve = false; } /// - /// Gets the cache-control value stored for the file. + /// Sets the last write time property for this file to preserve, which means to keep the existing value unchanged. /// - /// The file's cache-control value. - const utility::string_t& cache_control() const + /// Explicitly specified datetime value, must be . + void set_last_write_time(preserve_t) { - return m_cache_control; + m_last_write_time = utility::datetime(); + m_last_write_time_now = false; + m_last_write_time_preserve = true; } /// - /// Sets the cache-control value stored for the file. + /// Gets the change time property for this file. /// - /// The file's cache-control value. - void set_cache_control(utility::string_t value) + /// An ISO 8601 datetime . + utility::datetime change_time() const { - m_cache_control = std::move(value); + return m_change_time; } /// - /// Gets the content-MD5 value stored for the file. + /// Gets the file id property for this file. /// - /// The file's content-MD5 hash. - const utility::string_t& content_md5() const + /// A contains the file id. + const utility::string_t& file_id() const { - return m_content_md5; + return m_file_id; } /// - /// Sets the content-MD5 value stored for the file. + /// Gets the parent file id property for this file. /// - /// The file's content-MD5 hash. - void set_content_md5(utility::string_t value) + /// A contains the parent file id. + const utility::string_t& file_parent_id() const { - m_content_md5 = std::move(value); + return m_parent_id; } /// - /// Gets the content-disposition value stored for the file. + /// Gets the file's lease status. /// - /// The file's content-disposition value. 
- const utility::string_t& content_disposition() const + /// An object that indicates the file's lease status. + azure::storage::lease_status lease_status() const { - return m_content_disposition; + return m_lease_status; } /// - /// Sets the content-disposition value stored for the file. + /// Gets the file's lease state. /// - /// The file's content-disposition value. - void set_content_disposition(utility::string_t value) + /// An object that indicates the file's lease state. + azure::storage::lease_state lease_state() const { - m_content_disposition = std::move(value); + return m_lease_state; + } + + /// + /// Gets the file's lease duration. + /// + /// An object that indicates the file's lease duration. + azure::storage::lease_duration lease_duration() const + { + return m_lease_duration; } private: - utility::size64_t m_length; + utility::size64_t m_length{ 0 }; utility::string_t m_etag; utility::datetime m_last_modified; @@ -2369,11 +3267,31 @@ namespace azure { namespace storage { utility::string_t m_content_md5; utility::string_t m_content_disposition; + bool m_server_encrypted{ false }; + + utility::string_t m_permission; + utility::string_t m_permission_key; + cloud_file_attributes m_attributes{ cloud_file_attributes::preserve }; + utility::datetime m_creation_time; + bool m_creation_time_now{ false }; + bool m_creation_time_preserve{ true }; + utility::datetime m_last_write_time; + bool m_last_write_time_now{ false }; + bool m_last_write_time_preserve{ true }; + utility::datetime m_change_time; + utility::string_t m_file_id; + utility::string_t m_parent_id; + azure::storage::lease_status m_lease_status; + azure::storage::lease_state m_lease_state; + azure::storage::lease_duration m_lease_duration; + void update_etag_and_last_modified(const cloud_file_properties& other); - void update_etag(const cloud_file_properties& other); + void update_acl_attributes_filetime_and_fileid(const cloud_file_properties& other); + void update_lease(const 
cloud_file_properties& other); friend class cloud_file; friend class protocol::file_response_parsers; + friend class list_file_and_directory_item; }; enum class file_range_write @@ -3918,6 +4836,224 @@ namespace azure { namespace storage { return *m_copy_state; } + /// + /// Acquires a lease on the file. + /// + /// A string representing the proposed lease ID for the new lease. May be an empty string if no lease ID is proposed. + /// A string containing the lease ID. + utility::string_t acquire_lease(const utility::string_t& proposed_lease_id) const + { + return acquire_lease_async(proposed_lease_id).get(); + } + + /// + /// Acquires a lease on the file. + /// + /// A string representing the proposed lease ID for the new lease. May be an empty string if no lease ID is proposed. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A string containing the lease ID. + utility::string_t acquire_lease(const utility::string_t& proposed_lease_id, const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + return acquire_lease_async(proposed_lease_id, condition, options, context).get(); + } + + /// + /// Initiates an asynchronous operation to acquire a lease on the file. + /// + /// A string representing the proposed lease ID for the new lease. May be an empty string if no lease ID is proposed. + /// A object of type that represents the current operation. + pplx::task acquire_lease_async(const utility::string_t& proposed_lease_id) const + { + return acquire_lease_async(proposed_lease_id, file_access_condition(), file_request_options(), operation_context()); + } + + /// + /// Initiates an asynchronous operation to acquire a lease on the file. + /// + /// A string representing the proposed lease ID for the new lease. 
May be an empty string if no lease ID is proposed. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task acquire_lease_async(const utility::string_t& proposed_lease_id, const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + return acquire_lease_async(proposed_lease_id, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to acquire a lease on the file. + /// + /// A string representing the proposed lease ID for the new lease. May be an empty string if no lease ID is proposed. + /// An object that represents the access condition for the operation. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + WASTORAGE_API pplx::task acquire_lease_async(const utility::string_t& proposed_lease_id, const file_access_condition& condition, const file_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; + + /// + /// Changes the lease ID on the file. + /// + /// A string containing the proposed lease ID for the lease. May not be empty. + /// An object that represents the access conditions for the file, including a required lease ID. + /// The new lease ID. + utility::string_t change_lease(const utility::string_t& proposed_lease_id, const file_access_condition& condition) const + { + return change_lease_async(proposed_lease_id, condition).get(); + } + + /// + /// Changes the lease ID on the file. + /// + /// A string containing the proposed lease ID for the lease. 
May not be empty. + /// An object that represents the access conditions for the file, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// The new lease ID. + utility::string_t change_lease(const utility::string_t& proposed_lease_id, const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + return change_lease_async(proposed_lease_id, condition, options, context).get(); + } + + /// + /// Initiates an asynchronous operation to change the lease ID on the file. + /// + /// A string containing the proposed lease ID for the lease. May not be empty. + /// An object that represents the access conditions for the file, including a required lease ID. + /// A object of type that represents the current operation. + pplx::task change_lease_async(const utility::string_t& proposed_lease_id, const file_access_condition& condition) const + { + return change_lease_async(proposed_lease_id, condition, file_request_options(), operation_context()); + } + + /// + /// Initiates an asynchronous operation to change the lease ID on the file. + /// + /// A string containing the proposed lease ID for the lease. May not be empty. + /// An object that represents the access conditions for the file, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object of type that represents the current operation. + pplx::task change_lease_async(const utility::string_t& proposed_lease_id, const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + return change_lease_async(proposed_lease_id, condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to change the lease ID on the file. 
+ /// + /// A string containing the proposed lease ID for the lease. May not be empty. + /// An object that represents the access conditions for the file, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object of type that represents the current operation. + WASTORAGE_API pplx::task<utility::string_t> change_lease_async(const utility::string_t& proposed_lease_id, const file_access_condition& condition, const file_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; + + /// + /// Releases the lease on the file. + /// + /// An object that represents the access conditions for the file, including a required lease ID. + void release_lease(const file_access_condition& condition) const + { + release_lease_async(condition).wait(); + } + + /// + /// Releases the lease on the file. + /// + /// An object that represents the access conditions for the file, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + void release_lease(const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + release_lease_async(condition, options, context).wait(); + } + + /// + /// Initiates an asynchronous operation to release the lease on the file. + /// + /// An object that represents the access conditions for the file, including a required lease ID. + /// A object that represents the current operation. + pplx::task<void> release_lease_async(const file_access_condition& condition) const + { + return release_lease_async(condition, file_request_options(), operation_context()); + } + + /// + /// Initiates an asynchronous operation to release the lease on the file.
+ /// + /// An object that represents the access conditions for the file, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task<void> release_lease_async(const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + return release_lease_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to release the lease on the file. + /// + /// An object that represents the access conditions for the file, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object that represents the current operation. + WASTORAGE_API pplx::task<void> release_lease_async(const file_access_condition& condition, const file_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; + + /// + /// Breaks the current lease on the file. + /// + void break_lease() const + { + return break_lease_async().get(); + } + + /// + /// Breaks the current lease on the file. + /// + /// An object that represents the access conditions for the file, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + void break_lease(const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + return break_lease_async(condition, options, context).get(); + } + + /// + /// Initiates an asynchronous operation to break the current lease on the file. + /// + /// A object that represents the current operation.
+ pplx::task<void> break_lease_async() const + { + return break_lease_async(file_access_condition(), file_request_options(), operation_context()); + } + + /// + /// Initiates an asynchronous operation to break the current lease on the file. + /// + /// An object that represents the access conditions for the file, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// A object that represents the current operation. + pplx::task<void> break_lease_async(const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + return break_lease_async(condition, options, context, pplx::cancellation_token::none()); + } + + /// + /// Initiates an asynchronous operation to break the current lease on the file. + /// + /// An object that represents the access conditions for the file, including a required lease ID. + /// An object that specifies additional options for the request. + /// An object that represents the context for the current operation. + /// An object that is used to cancel the current operation. + /// A object that represents the current operation.
+ WASTORAGE_API pplx::task<void> break_lease_async(const file_access_condition& condition, const file_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const; + private: void init(storage_credentials credentials); @@ -3983,7 +5119,9 @@ namespace azure { namespace storage { { throw std::runtime_error("Cannot access a cloud file directory as cloud file"); } - return cloud_file(m_name, m_directory); + cloud_file result = cloud_file(m_name, m_directory); + result.properties().m_length = static_cast<utility::size64_t>(m_length); + return result; } /// @@ -4021,11 +5159,24 @@ namespace azure { namespace storage { return m_name; } + const utility::string_t& file_id() const + { + return m_file_id; + } + + void set_file_id(utility::string_t file_id) + { + m_file_id = std::move(file_id); + } + private: bool m_is_file; utility::string_t m_name; int64_t m_length; cloud_file_directory m_directory; + utility::string_t m_file_id; }; -}} // namespace azure::storage \ No newline at end of file +}} // namespace azure::storage + +#pragma pop_macro("max") diff --git a/Microsoft.WindowsAzure.Storage/includes/was/queue.h b/Microsoft.WindowsAzure.Storage/includes/was/queue.h index 6f9386b6..f07825e7 100644 --- a/Microsoft.WindowsAzure.Storage/includes/was/queue.h +++ b/Microsoft.WindowsAzure.Storage/includes/was/queue.h @@ -290,7 +290,7 @@ namespace azure { namespace storage { /// Returns the next time that the message will be visible. /// /// The next time that the message will be visible.
- utility::datetime next_visibile_time() const + utility::datetime next_visible_time() const { return m_next_visible_time; } @@ -344,6 +344,8 @@ namespace azure { namespace storage { utility::datetime m_next_visible_time; int m_dequeue_count; + void update_message_info(const cloud_queue_message& message_info); + friend class cloud_queue; }; @@ -725,7 +727,8 @@ namespace azure { namespace storage { void initialize() { set_authentication_scheme(azure::storage::authentication_scheme::shared_key); - m_default_request_options.set_retry_policy(exponential_retry_policy()); + if (!m_default_request_options.retry_policy().is_valid()) + m_default_request_options.set_retry_policy(exponential_retry_policy()); } queue_request_options get_modified_options(const queue_request_options& options) const; diff --git a/Microsoft.WindowsAzure.Storage/includes/was/service_client.h b/Microsoft.WindowsAzure.Storage/includes/was/service_client.h index 49659fd2..5202d49a 100644 --- a/Microsoft.WindowsAzure.Storage/includes/was/service_client.h +++ b/Microsoft.WindowsAzure.Storage/includes/was/service_client.h @@ -144,9 +144,9 @@ namespace azure { namespace storage { m_authentication_handler = value; } - WASTORAGE_API pplx::task<service_properties> download_service_properties_base_async(const request_options& modified_options, operation_context context) const; - WASTORAGE_API pplx::task<void> upload_service_properties_base_async(const service_properties& properties, const service_properties_includes& includes, const request_options& modified_options, operation_context context) const; - WASTORAGE_API pplx::task<service_stats> download_service_stats_base_async(const request_options& modified_options, operation_context context) const; + WASTORAGE_API pplx::task<service_properties> download_service_properties_base_async(const request_options& modified_options, operation_context context, const pplx::cancellation_token& cancellation_token = pplx::cancellation_token::none()) const; + WASTORAGE_API pplx::task<void> upload_service_properties_base_async(const
service_properties& properties, const service_properties_includes& includes, const request_options& modified_options, operation_context context, const pplx::cancellation_token& cancellation_token = pplx::cancellation_token::none()) const; + WASTORAGE_API pplx::task<service_stats> download_service_stats_base_async(const request_options& modified_options, operation_context context, const pplx::cancellation_token& cancellation_token = pplx::cancellation_token::none()) const; private: diff --git a/Microsoft.WindowsAzure.Storage/includes/was/table.h b/Microsoft.WindowsAzure.Storage/includes/was/table.h index 01eb4f18..f8d7e89e 100644 --- a/Microsoft.WindowsAzure.Storage/includes/was/table.h +++ b/Microsoft.WindowsAzure.Storage/includes/was/table.h @@ -577,6 +577,8 @@ namespace azure { namespace storage { /// The byte array value. void set_value(const std::vector<uint8_t>& value) { + m_property_type = edm_type::binary; + m_is_null = false; set_value_impl(value); } @@ -1939,7 +1941,8 @@ namespace azure { namespace storage { void initialize() { set_authentication_scheme(azure::storage::authentication_scheme::shared_key); - m_default_request_options.set_retry_policy(exponential_retry_policy()); + if (!m_default_request_options.retry_policy().is_valid()) + m_default_request_options.set_retry_policy(exponential_retry_policy()); } table_request_options get_modified_options(const table_request_options& options) const; diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/blobstreams.h b/Microsoft.WindowsAzure.Storage/includes/wascore/blobstreams.h index 1bcf2a23..154b5a5c 100644 --- a/Microsoft.WindowsAzure.Storage/includes/wascore/blobstreams.h +++ b/Microsoft.WindowsAzure.Storage/includes/wascore/blobstreams.h @@ -29,11 +29,12 @@ namespace azure { namespace storage { namespace core { { public: - basic_cloud_blob_istreambuf(std::shared_ptr<cloud_blob> blob, const access_condition &condition, const blob_request_options& options, operation_context context) + basic_cloud_blob_istreambuf(std::shared_ptr<cloud_blob> blob,
const access_condition &condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout) : basic_istreambuf(), m_blob(blob), m_condition(condition), m_options(options), m_context(context), m_current_blob_offset(0), m_next_blob_offset(0), m_buffer_size(options.stream_read_size_in_bytes()), - m_next_buffer_size(options.stream_read_size_in_bytes()), m_buffer(std::ios_base::in) + m_next_buffer_size(options.stream_read_size_in_bytes()), m_buffer(std::ios_base::in), + m_cancellation_token(cancellation_token), m_use_request_level_timeout(use_request_level_timeout) { if (!options.disable_content_md5_validation() && !m_blob->properties().content_md5().empty()) { @@ -161,6 +162,8 @@ namespace azure { namespace storage { namespace core { off_type m_next_blob_offset; size_t m_buffer_size; size_t m_next_buffer_size; + bool m_use_request_level_timeout; + const pplx::cancellation_token m_cancellation_token; concurrency::streams::container_buffer> m_buffer; }; @@ -168,8 +171,8 @@ namespace azure { namespace storage { namespace core { class cloud_blob_istreambuf : public concurrency::streams::streambuf { public: - cloud_blob_istreambuf(std::shared_ptr blob, const access_condition &condition, const blob_request_options& options, operation_context context) - : concurrency::streams::streambuf(std::make_shared(blob, condition, options, context)) + cloud_blob_istreambuf(std::shared_ptr blob, const access_condition &condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout) + : concurrency::streams::streambuf(std::make_shared(blob, condition, options, context, cancellation_token, use_request_level_timeout)) { } }; @@ -177,9 +180,9 @@ namespace azure { namespace storage { namespace core { class basic_cloud_blob_ostreambuf : public basic_cloud_ostreambuf { public: - basic_cloud_blob_ostreambuf(const 
access_condition &condition, const blob_request_options& options, operation_context context) + basic_cloud_blob_ostreambuf(const access_condition &condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout, std::shared_ptr timer_handler) : basic_cloud_ostreambuf(), - m_condition(condition), m_options(options), m_context(context), m_semaphore(options.parallelism_factor()) + m_condition(condition), m_options(options), m_context(context), m_semaphore(options.parallelism_factor()), m_cancellation_token(cancellation_token), m_use_request_level_timeout(use_request_level_timeout), m_timer_handler(timer_handler) { m_buffer_size = options.stream_write_size_in_bytes(); m_next_buffer_size = options.stream_write_size_in_bytes(); @@ -188,6 +191,10 @@ namespace azure { namespace storage { namespace core { { m_transaction_hash_provider = hash_provider::create_md5_hash_provider(); } + else if (options.use_transactional_crc64()) + { + m_transaction_hash_provider = hash_provider::create_crc64_hash_provider(); + } if (options.store_blob_content_md5()) { @@ -203,14 +210,16 @@ namespace azure { namespace storage { namespace core { blob_request_options m_options; operation_context m_context; async_semaphore m_semaphore; - + bool m_use_request_level_timeout; + const pplx::cancellation_token m_cancellation_token; + std::shared_ptr m_timer_handler; }; class basic_cloud_block_blob_ostreambuf : public basic_cloud_blob_ostreambuf { public: - basic_cloud_block_blob_ostreambuf(std::shared_ptr blob, const access_condition &condition, const blob_request_options& options, operation_context context) - : basic_cloud_blob_ostreambuf(condition, options, context), + basic_cloud_block_blob_ostreambuf(std::shared_ptr blob, const access_condition &condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout, 
std::shared_ptr timer_handler) + : basic_cloud_blob_ostreambuf(condition, options, context, cancellation_token, use_request_level_timeout, timer_handler), m_blob(blob), m_block_id_prefix(utility::uuid_to_string(utility::new_uuid())) { } @@ -255,8 +264,8 @@ namespace azure { namespace storage { namespace core { { public: - cloud_block_blob_ostreambuf(std::shared_ptr blob,const access_condition &condition, const blob_request_options& options, operation_context context) - : concurrency::streams::streambuf(std::make_shared(blob, condition, options, context)) + cloud_block_blob_ostreambuf(std::shared_ptr blob,const access_condition &condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout, std::shared_ptr timer_handler) + : concurrency::streams::streambuf(std::make_shared(blob, condition, options, context, cancellation_token, use_request_level_timeout, timer_handler)) { } }; @@ -265,8 +274,8 @@ namespace azure { namespace storage { namespace core { { public: - basic_cloud_page_blob_ostreambuf(std::shared_ptr blob, utility::size64_t blob_size, const access_condition &condition, const blob_request_options& options, operation_context context) - : basic_cloud_blob_ostreambuf(condition, options, context), + basic_cloud_page_blob_ostreambuf(std::shared_ptr blob, utility::size64_t blob_size, const access_condition &condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout, std::shared_ptr timer_handler) + : basic_cloud_blob_ostreambuf(condition, options, context, cancellation_token, use_request_level_timeout, timer_handler), m_blob(blob), m_blob_size(blob_size), m_current_blob_offset(0) { } @@ -322,8 +331,8 @@ namespace azure { namespace storage { namespace core { { public: - cloud_page_blob_ostreambuf(std::shared_ptr blob, utility::size64_t blob_size, const access_condition 
&condition, const blob_request_options& options, operation_context context) - : concurrency::streams::streambuf(std::make_shared(blob, blob_size, condition, options, context)) + cloud_page_blob_ostreambuf(std::shared_ptr blob, utility::size64_t blob_size, const access_condition &condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout, std::shared_ptr timer_handler) + : concurrency::streams::streambuf(std::make_shared(blob, blob_size, condition, options, context, cancellation_token, use_request_level_timeout, timer_handler)) { } }; @@ -331,8 +340,8 @@ namespace azure { namespace storage { namespace core { class basic_cloud_append_blob_ostreambuf : public basic_cloud_blob_ostreambuf { public: - basic_cloud_append_blob_ostreambuf(std::shared_ptr blob, const access_condition &condition, const blob_request_options& options, operation_context context) - : basic_cloud_blob_ostreambuf(condition, options, context), + basic_cloud_append_blob_ostreambuf(std::shared_ptr blob, const access_condition &condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout, std::shared_ptr timer_handler) + : basic_cloud_blob_ostreambuf(condition, options, context, cancellation_token, use_request_level_timeout, timer_handler), m_blob(blob), m_current_blob_offset(condition.append_position() == -1 ? 
blob->properties().size() : condition.append_position()) { m_semaphore = async_semaphore(1); @@ -375,8 +384,8 @@ namespace azure { namespace storage { namespace core { { public: - cloud_append_blob_ostreambuf(std::shared_ptr blob, const access_condition &condition, const blob_request_options& options, operation_context context) - : concurrency::streams::streambuf(std::make_shared(blob, condition, options, context)) + cloud_append_blob_ostreambuf(std::shared_ptr blob, const access_condition &condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout, std::shared_ptr timer_handler) + : concurrency::streams::streambuf(std::make_shared(blob, condition, options, context, cancellation_token, use_request_level_timeout, timer_handler)) { } }; diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/constants.dat b/Microsoft.WindowsAzure.Storage/includes/wascore/constants.dat index 75229155..1860e049 100644 --- a/Microsoft.WindowsAzure.Storage/includes/wascore/constants.dat +++ b/Microsoft.WindowsAzure.Storage/includes/wascore/constants.dat @@ -31,6 +31,7 @@ DAT(service_file, _XPLATSTR("file")) DAT(uri_query_timeout, _XPLATSTR("timeout")) DAT(uri_query_resource_type, _XPLATSTR("restype")) DAT(uri_query_snapshot, _XPLATSTR("snapshot")) +DAT(uri_query_version_id, _XPLATSTR("versionid")) DAT(uri_query_prevsnapshot, _XPLATSTR("prevsnapshot")) DAT(uri_query_component, _XPLATSTR("comp")) DAT(uri_query_block_id, _XPLATSTR("blockid")) @@ -65,6 +66,12 @@ DAT(uri_query_sas_services, _XPLATSTR("ss")) DAT(uri_query_sas_resource_types, _XPLATSTR("srt")) DAT(uri_query_sas_ip, _XPLATSTR("sip")) DAT(uri_query_sas_protocol, _XPLATSTR("spr")) +DAT(uri_query_sas_skoid, _XPLATSTR("skoid")) +DAT(uri_query_sas_sktid, _XPLATSTR("sktid")) +DAT(uri_query_sas_skt, _XPLATSTR("skt")) +DAT(uri_query_sas_ske, _XPLATSTR("ske")) +DAT(uri_query_sas_sks, _XPLATSTR("sks")) +DAT(uri_query_sas_skv, 
_XPLATSTR("skv")) // table query parameters DAT(table_query_next_partition_key, _XPLATSTR("NextPartitionKey")) @@ -75,6 +82,7 @@ DAT(table_query_next_table_name, _XPLATSTR("NextTableName")) DAT(queue_query_next_marker, _XPLATSTR("marker")) // resource types +DAT(resource_account, _XPLATSTR("account")) DAT(resource_service, _XPLATSTR("service")) DAT(resource_container, _XPLATSTR("container")) DAT(resource_blob, _XPLATSTR("blob")) @@ -91,6 +99,7 @@ DAT(component_properties, _XPLATSTR("properties")) DAT(component_metadata, _XPLATSTR("metadata")) DAT(component_snapshot, _XPLATSTR("snapshot")) DAT(component_snapshots, _XPLATSTR("snapshots")) +DAT(component_versions, _XPLATSTR("versions")) DAT(component_uncommitted_blobs, _XPLATSTR("uncommittedblobs")) DAT(component_lease, _XPLATSTR("lease")) DAT(component_block, _XPLATSTR("block")) @@ -102,6 +111,10 @@ DAT(component_copy, _XPLATSTR("copy")) DAT(component_acl, _XPLATSTR("acl")) DAT(component_range_list, _XPLATSTR("rangelist")) DAT(component_range, _XPLATSTR("range")) +DAT(component_incrementalcopy, _XPLATSTR("incrementalcopy")) +DAT(component_tier, _XPLATSTR("tier")) +DAT(component_file_permission, _XPLATSTR("filepermission")) +DAT(component_user_delegation_key, _XPLATSTR("userdelegationkey")) // common resources DAT(root_container, _XPLATSTR("$root")) @@ -140,6 +153,7 @@ DAT(ms_header_copy_status, _XPLATSTR("x-ms-copy-status")) DAT(ms_header_copy_progress, _XPLATSTR("x-ms-copy-progress")) DAT(ms_header_copy_status_description, _XPLATSTR("x-ms-copy-status-description")) DAT(ms_header_copy_source, _XPLATSTR("x-ms-copy-source")) +DAT(ms_header_copy_ignore_readonly, _XPLATSTR("x-ms-copy-ignore-readonly")) DAT(ms_header_delete_snapshots, _XPLATSTR("x-ms-delete-snapshots")) DAT(ms_header_request_id, _XPLATSTR("x-ms-request-id")) DAT(ms_header_request_server_encrypted, _XPLATSTR("x-ms-request-server-encrypted")) @@ -147,6 +161,7 @@ DAT(ms_header_client_request_id, _XPLATSTR("x-ms-client-request-id")) DAT(ms_header_range, 
_XPLATSTR("x-ms-range")) DAT(ms_header_page_write, _XPLATSTR("x-ms-page-write")) DAT(ms_header_range_get_content_md5, _XPLATSTR("x-ms-range-get-content-md5")) +DAT(ms_header_range_get_content_crc64, _XPLATSTR("x-ms-range-get-content-crc64")) DAT(ms_header_lease_id, _XPLATSTR("x-ms-lease-id")) DAT(ms_header_lease_action, _XPLATSTR("x-ms-lease-action")) DAT(ms_header_lease_state, _XPLATSTR("x-ms-lease-state")) @@ -171,9 +186,43 @@ DAT(ms_header_approximate_messages_count, _XPLATSTR("x-ms-approximate-messages-c DAT(ms_header_pop_receipt, _XPLATSTR("x-ms-popreceipt")) DAT(ms_header_time_next_visible, _XPLATSTR("x-ms-time-next-visible")) DAT(ms_header_share_quota, _XPLATSTR("x-ms-share-quota")) +DAT(ms_header_content_md5, _XPLATSTR("x-ms-content-md5")) +DAT(ms_header_content_crc64, _XPLATSTR("x-ms-content-crc64")) +DAT(ms_header_incremental_copy, _XPLATSTR("x-ms-incremental-copy")) +DAT(ms_header_copy_destination_snapshot, _XPLATSTR("x-ms-copy-destination-snapshot")) +DAT(ms_header_access_tier, _XPLATSTR("x-ms-access-tier")) +DAT(ms_header_access_tier_inferred, _XPLATSTR("x-ms-access-tier-inferred")) +DAT(ms_header_archive_status, _XPLATSTR("x-ms-archive-status")) +DAT(ms_header_tier_change_time, _XPLATSTR("x-ms-access-tier-change-time")) +DAT(ms_header_sku_name, _XPLATSTR("x-ms-sku-name")) +DAT(ms_header_account_kind, _XPLATSTR("x-ms-account-kind")) +DAT(ms_header_content_type, _XPLATSTR("x-ms-content-type")) +DAT(ms_header_content_length, _XPLATSTR("x-ms-content-length")) +DAT(ms_header_content_encoding, _XPLATSTR("x-ms-content-encoding")) +DAT(ms_header_content_language, _XPLATSTR("x-ms-content-language")) +DAT(ms_header_cache_control, _XPLATSTR("x-ms-cache-control")) +DAT(ms_header_content_disposition, _XPLATSTR("x-ms-content-disposition")) +DAT(ms_header_file_permission, _XPLATSTR("x-ms-file-permission")) +DAT(ms_header_file_permission_key, _XPLATSTR("x-ms-file-permission-key")) +DAT(ms_header_file_permission_copy_mode, _XPLATSTR("x-ms-file-permission-copy-mode")) 
+DAT(ms_header_file_attributes, _XPLATSTR("x-ms-file-attributes")) +DAT(ms_header_file_creation_time, _XPLATSTR("x-ms-file-creation-time")) +DAT(ms_header_file_last_write_time, _XPLATSTR("x-ms-file-last-write-time")) +DAT(ms_header_file_change_time, _XPLATSTR("x-ms-file-change-time")) +DAT(ms_header_file_id, _XPLATSTR("x-ms-file-id")) +DAT(ms_header_file_parent_id, _XPLATSTR("x-ms-file-parent-id")) +DAT(ms_header_previous_snapshot_url, _XPLATSTR("x-ms-previous-snapshot-url")) +DAT(ms_header_encryption_key, _XPLATSTR("x-ms-encryption-key")) +DAT(ms_header_encryption_key_sha256, _XPLATSTR("x-ms-encryption-key-sha256")) +DAT(ms_header_encryption_algorithm, _XPLATSTR("x-ms-encryption-algorithm")) +DAT(ms_header_share_next_allowed_quota_downgrade_time, _XPLATSTR("x-ms-share-next-allowed-quota-downgrade-time")) +DAT(ms_header_share_provisioned_egress_mbps, _XPLATSTR("x-ms-share-provisioned-egress-mbps")) +DAT(ms_header_share_provisioned_ingress_mbps, _XPLATSTR("x-ms-share-provisioned-ingress-mbps")) +DAT(ms_header_share_provisioned_iops, _XPLATSTR("x-ms-share-provisioned-iops")) +DAT(ms_header_version_id, _XPLATSTR("x-ms-version-id")) // header values -DAT(header_value_storage_version, _XPLATSTR("2015-12-11")) +DAT(header_value_storage_version, _XPLATSTR("2019-12-12")) DAT(header_value_true, _XPLATSTR("true")) DAT(header_value_false, _XPLATSTR("false")) DAT(header_value_locked, _XPLATSTR("locked")) @@ -216,6 +265,37 @@ DAT(header_value_content_type_utf8, _XPLATSTR("text/plain; charset=utf-8")) DAT(header_value_content_type_mime_multipart_prefix, _XPLATSTR("multipart/mixed; boundary=")) DAT(header_value_content_type_http, _XPLATSTR("application/http")) DAT(header_value_content_transfer_encoding_binary, _XPLATSTR("binary")) +DAT(header_value_access_tier_hot, _XPLATSTR("Hot")) +DAT(header_value_access_tier_cool, _XPLATSTR("Cool")) +DAT(header_value_access_tier_archive, _XPLATSTR("Archive")) +DAT(header_value_access_tier_unknown, _XPLATSTR("Unknown")) 
+DAT(header_value_access_tier_p4, _XPLATSTR("P4")) +DAT(header_value_access_tier_p6, _XPLATSTR("P6")) +DAT(header_value_access_tier_p10, _XPLATSTR("P10")) +DAT(header_value_access_tier_p20, _XPLATSTR("P20")) +DAT(header_value_access_tier_p30, _XPLATSTR("P30")) +DAT(header_value_access_tier_p40, _XPLATSTR("P40")) +DAT(header_value_access_tier_p50, _XPLATSTR("P50")) +DAT(header_value_access_tier_p60, _XPLATSTR("P60")) +DAT(header_value_archive_status_to_hot, _XPLATSTR("rehydrate-pending-to-hot")) +DAT(header_value_archive_status_to_cool, _XPLATSTR("rehydrate-pending-to-cool")) +DAT(header_value_file_property_preserve, _XPLATSTR("preserve")) +DAT(header_value_file_property_source, _XPLATSTR("source")) +DAT(header_value_file_permission_inherit, _XPLATSTR("inherit")) +DAT(header_value_file_permission_override, _XPLATSTR("override")) +DAT(header_value_file_time_now, _XPLATSTR("now")) +DAT(header_value_file_attribute_none, _XPLATSTR("None")) +DAT(header_value_file_attribute_readonly, _XPLATSTR("ReadOnly")) +DAT(header_value_file_attribute_hidden, _XPLATSTR("Hidden")) +DAT(header_value_file_attribute_system, _XPLATSTR("System")) +DAT(header_value_file_attribute_directory, _XPLATSTR("Directory")) +DAT(header_value_file_attribute_archive, _XPLATSTR("Archive")) +DAT(header_value_file_attribute_temporary, _XPLATSTR("Temporary")) +DAT(header_value_file_attribute_offline, _XPLATSTR("Offline")) +DAT(header_value_file_attribute_notcontentindexed, _XPLATSTR("NotContentIndexed")) +DAT(header_value_file_attribute_noscrubdata, _XPLATSTR("NoScrubData")) +DAT(header_value_file_attribute_delimiter, _XPLATSTR(" | ")) +DAT(header_value_encryption_algorithm_aes256, _XPLATSTR("AES256")) // xml strings DAT(xml_last_modified, _XPLATSTR("Last-Modified")) @@ -223,6 +303,7 @@ DAT(xml_etag, _XPLATSTR("Etag")) DAT(xml_lease_status, _XPLATSTR("LeaseStatus")) DAT(xml_lease_state, _XPLATSTR("LeaseState")) DAT(xml_lease_duration, _XPLATSTR("LeaseDuration")) +DAT(xml_public_access, 
_XPLATSTR("PublicAccess")) DAT(xml_content_length, _XPLATSTR("Content-Length")) DAT(xml_content_disposition, _XPLATSTR("Content-Disposition")) DAT(xml_content_type, _XPLATSTR("Content-Type")) @@ -238,6 +319,8 @@ DAT(xml_copy_source, _XPLATSTR("CopySource")) DAT(xml_copy_progress, _XPLATSTR("CopyProgress")) DAT(xml_copy_completion_time, _XPLATSTR("CopyCompletionTime")) DAT(xml_copy_status_description, _XPLATSTR("CopyStatusDescription")) +DAT(xml_incremental_copy, _XPLATSTR("IncrementalCopy")) +DAT(xml_copy_destination_snapshot, _XPLATSTR("CopyDestinationSnapshot")) DAT(xml_next_marker, _XPLATSTR("NextMarker")) DAT(xml_containers, _XPLATSTR("Containers")) DAT(xml_container, _XPLATSTR("Container")) @@ -247,6 +330,8 @@ DAT(xml_blob_prefix, _XPLATSTR("BlobPrefix")) DAT(xml_properties, _XPLATSTR("Properties")) DAT(xml_metadata, _XPLATSTR("Metadata")) DAT(xml_snapshot, _XPLATSTR("Snapshot")) +DAT(xml_version_id, _XPLATSTR("VersionId")) +DAT(xml_is_current_version, _XPLATSTR("IsCurrentVersion")) DAT(xml_enumeration_results, _XPLATSTR("EnumerationResults")) DAT(xml_service_endpoint, _XPLATSTR("ServiceEndpoint")) DAT(xml_container_name, _XPLATSTR("ContainerName")) @@ -304,24 +389,48 @@ DAT(xml_service_stats_geo_replication_status_bootstrap, _XPLATSTR("bootstrap")) DAT(xml_service_stats_geo_replication_last_sync_time, _XPLATSTR("LastSyncTime")) DAT(xml_url, _XPLATSTR("Url")) DAT(xml_quota, _XPLATSTR("Quota")) +DAT(xml_provisioned_iops, _XPLATSTR("ProvisionedIops")) +DAT(xml_provisioned_ingress_mbps, _XPLATSTR("ProvisionedIngressMBps")) +DAT(xml_provisioned_egress_mpbs, _XPLATSTR("ProvisionedEgressMBps")) +DAT(xml_next_allowed_quota_downgrade_time, _XPLATSTR("NextAllowedQuotaDowngradeTime")) DAT(xml_range, _XPLATSTR("Range")) DAT(xml_share, _XPLATSTR("Share")) DAT(xml_shares, _XPLATSTR("Shares")) +DAT(xml_file_id, _XPLATSTR("Id")) +DAT(xml_access_tier, _XPLATSTR("AccessTier")) +DAT(xml_access_tier_inferred, _XPLATSTR("AccessTierInferred")) +DAT(xml_access_tier_change_time, 
_XPLATSTR("AccessTierChangeTime")) +DAT(xml_user_delegation_key, _XPLATSTR("UserDelegationKey")) +DAT(xml_user_delegation_key_signed_oid, _XPLATSTR("SignedOid")) +DAT(xml_user_delegation_key_signed_tid, _XPLATSTR("SignedTid")) +DAT(xml_user_delegation_key_signed_start, _XPLATSTR("SignedStart")) +DAT(xml_user_delegation_key_signed_expiry, _XPLATSTR("SignedExpiry")) +DAT(xml_user_delegation_key_signed_service, _XPLATSTR("SignedService")) +DAT(xml_user_delegation_key_signed_version, _XPLATSTR("SignedVersion")) +DAT(xml_user_delegation_key_value, _XPLATSTR("Value")) +DAT(xml_user_delegation_key_info, _XPLATSTR("KeyInfo")) +DAT(xml_user_delegation_key_start, _XPLATSTR("Start")) +DAT(xml_user_delegation_key_expiry, _XPLATSTR("Expiry")) + +// json strings +DAT(json_file_permission, _XPLATSTR("permission")) #define STR(x) #x -#define VER(x) _XPLATSTR("Azure-Storage/2.6.0 (Native; Windows; MSC_VER " STR(x) ")") +#define VER(x) _XPLATSTR("Azure-Storage/7.5.0 (Native; Windows; MSC_VER " STR(x) ")") #if defined(_WIN32) #if defined(_MSC_VER) - #if _MSC_VER == 1800 - DAT(header_value_user_agent, _XPLATSTR("Azure-Storage/2.6.0 (Native; Windows; MSC_VER 1800 )")) - #else + #if _MSC_VER >= 1900 DAT(header_value_user_agent, VER(_MSC_VER)) + #elif _MSC_VER >= 1800 + DAT(header_value_user_agent, _XPLATSTR("Azure-Storage/7.5.0 (Native; Windows; MSC_VER 18XX)")) + #else + DAT(header_value_user_agent, _XPLATSTR("Azure-Storage/7.5.0 (Native; Windows; MSC_VER < 1800)")) #endif #else - DAT(header_value_user_agent, _XPLATSTR("Azure-Storage/2.6.0 (Native; Windows)")) + DAT(header_value_user_agent, _XPLATSTR("Azure-Storage/7.5.0 (Native; Windows)")) #endif #else - DAT(header_value_user_agent, _XPLATSTR("Azure-Storage/2.6.0 (Native)")) + DAT(header_value_user_agent, _XPLATSTR("Azure-Storage/7.5.0 (Native)")) #endif #endif // _CONSTANTS @@ -331,15 +440,21 @@ DAT(error_blob_type_mismatch, "Blob type of the blob reference doesn't match blo DAT(error_closed_stream, "Cannot access a closed stream.") 
DAT(error_lease_id_on_source, "A lease condition cannot be specified on the source of a copy.") DAT(error_incorrect_length, "Incorrect number of bytes received.") +DAT(error_xml_not_complete, "The XML parsed is not complete.") +DAT(error_blob_over_max_block_limit, "The total blocks required for this upload exceeds the maximum block limit. Please increase the block size if applicable and ensure the Blob size is not greater than the maximum Blob size limit.") DAT(error_md5_mismatch, "Calculated MD5 does not match existing property.") +DAT(error_crc64_mismatch, "Calculated CRC64 does not match existing property.") DAT(error_missing_md5, "MD5 does not exist. If you do not want to force validation, please disable use_transactional_md5.") +DAT(error_missing_crc64, "CRC64 does not exist. If you do not want to force validation, please disable use_transactional_crc64.") DAT(error_sas_missing_credentials, "Cannot create Shared Access Signature unless Shared Key credentials are used.") +DAT(error_uds_missing_credentials, "Cannot create User Delegation SAS unless token credentials are used.") DAT(error_client_timeout, "The client could not finish the operation within specified timeout.") DAT(error_cannot_modify_snapshot, "Cannot perform this operation on a blob representing a snapshot.") -DAT(error_page_blob_size_unknown, "The size of the page blob could not be determined, because stream is not seekable and a length argument is not provided.") -DAT(error_file_size_unknown, "The size of the file could not be determined, because stream is not seekable and a length argument is not provided.") +DAT(error_page_blob_size_unknown, "The size of the page blob could not be determined, because a length argument is not provided and stream is not seekable or stream length exceeds the permitted length.") +DAT(error_file_size_unknown, "The size of the file could not be determined, because a length argument is not provided and stream is not seekable or stream length exceeds the permitted
length.") DAT(error_stream_short, "The requested number of bytes exceeds the length of the stream remaining from the specified position.") DAT(error_stream_length, "The length of the stream exceeds the permitted length.") +DAT(error_stream_length_unknown, "The length of the stream could not be determined, because the stream is not seekable or its length exceeds the permitted length.") DAT(error_unsupported_text_blob, "Only plain text with utf-8 encoding is supported.") DAT(error_unsupported_text, "Only plain text with utf-8 encoding is supported.") DAT(error_multiple_snapshots, "Cannot provide snapshot time as part of the address and as constructor parameter. Either pass in the address or use a different constructor.") @@ -353,7 +468,14 @@ DAT(error_md5_options_mismatch, "When uploading a blob in a single request, stor DAT(error_storage_uri_empty, "Primary or secondary location URI must be supplied.") DAT(error_storage_uri_mismatch, "Primary and secondary location URIs must point to the same resource.") +#if defined(_WIN32) +DAT(error_operation_canceled, "operation canceled") +#else +DAT(error_operation_canceled, "Operation canceled") +#endif + DAT(error_empty_batch_operation, "The batch operation cannot be empty.") +DAT(error_batch_size_not_match_response, "The received batch result size does not match the size of the batch operations sent to the server.") DAT(error_batch_operation_partition_key_mismatch, "The batch operation cannot contain entities with different partition keys.") DAT(error_batch_operation_retrieve_count, "The batch operation cannot contain more than one retrieve operation.") DAT(error_batch_operation_retrieve_mix, "The batch operation cannot contain any other operations when it contains a retrieve operation.") @@ -370,8 +492,7 @@ DAT(error_parse_int32, "An error occurred parsing the 32-bit integer.") DAT(error_entity_property_not_int64, "The type of the entity property is not 64-bit integer.") DAT(error_entity_property_not_string, "The type of 
the entity property is not string.") -DAT(error_non_positive_time_to_live, "The time to live cannot be zero or negative.") -DAT(error_large_time_to_live, "The time to live cannot be greater than 604800.") +DAT(error_invalid_value_time_to_live, "The time to live cannot be zero or any negative number other than -1.") DAT(error_negative_initial_visibility_timeout, "The initial visibility timeout cannot be negative.") DAT(error_large_initial_visibility_timeout, "The initial visibility timeout cannot be greater than 604800.") DAT(error_negative_visibility_timeout, "The visibility timeout cannot be negative.") @@ -386,6 +507,7 @@ DAT(error_free_uuid, "An error occurred freeing the UUID string.") DAT(error_parse_uuid, "An error occurred parsing the UUID.") DAT(error_empty_metadata_value, "The metadata value cannot be empty or consist entirely of whitespace.") +DAT(error_empty_whitespace_metadata_name, "The metadata name cannot be empty or contain any whitespace.") DAT(error_hash_on_closed_streambuf, "Hash is calculated when the streambuf is closed.") DAT(error_invalid_settings_form, "Settings must be of the form \"name=value\".") diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/constants.h b/Microsoft.WindowsAzure.Storage/includes/wascore/constants.h index 9eaa1c05..dfb82e22 100644 --- a/Microsoft.WindowsAzure.Storage/includes/wascore/constants.h +++ b/Microsoft.WindowsAzure.Storage/includes/wascore/constants.h @@ -24,12 +24,22 @@ namespace azure { namespace storage { namespace protocol { // size constants - const size_t max_block_size = 4 * 1024 * 1024; - const size_t single_block_size = 4 * 1024 * 1024; + const size_t max_block_number = 50000; + const utility::size64_t max_block_size = 4 * 1000 * 1024 * 1024ULL; + const utility::size64_t max_block_blob_size = static_cast<utility::size64_t>(max_block_number) * max_block_size; + const size_t max_append_block_size = 4 * 1024 * 1024; + const size_t max_page_size = 4 * 1024 * 1024; + const size_t max_range_size = 4 * 1024 * 1024;
+ const utility::size64_t max_single_blob_upload_threshold = 5000 * 1024 * 1024ULL; + + const size_t default_stream_write_size = 4 * 1024 * 1024; + const size_t default_stream_read_size = 4 * 1024 * 1024; const size_t default_buffer_size = 64 * 1024; - const utility::size64_t default_single_blob_upload_threshold = 32 * 1024 * 1024; + const bool default_validate_certificates = true; + const utility::size64_t default_single_blob_upload_threshold = 128 * 1024 * 1024; const utility::size64_t default_single_blob_download_threshold = 32 * 1024 * 1024; const utility::size64_t default_single_block_download_threshold = 4 * 1024 * 1024; + const size_t transactional_md5_block_size = 4 * 1024 * 1024; // duration constants const std::chrono::seconds default_retry_interval(3); @@ -37,7 +47,7 @@ namespace azure { namespace storage { namespace protocol { // that Casablanca 2.2.0 on Linux can accept, which is derived from // the maximum value for a signed long on g++, divided by 1000. // Choosing to set it to 24 days to align with .NET. - const std::chrono::seconds default_maximum_execution_time(24 * 24 * 60 * 60); + const std::chrono::milliseconds default_maximum_execution_time(24 * 24 * 60 * 60 * 1000); // the following value is used to exit the network connection if there is no activity in network. 
const std::chrono::seconds default_noactivity_timeout(60); // For the following value, "0" means "don't send a timeout to the service" @@ -49,11 +59,8 @@ namespace azure { namespace storage { namespace protocol { const std::chrono::seconds minimum_fixed_lease_duration(15); const std::chrono::seconds maximum_fixed_lease_duration(60); - // could file share limitation - const int maximum_share_quota(5120); - #define _CONSTANTS -#define DAT(a,b) extern WASTORAGE_API const utility::char_t* a; const size_t a ## _size{ sizeof(b) / sizeof(utility::char_t) - 1 }; +#define DAT(a, b) WASTORAGE_API extern const utility::char_t a[]; const size_t a ## _size = sizeof(b) / sizeof(utility::char_t) - 1; #include "constants.dat" #undef DAT #undef _CONSTANTS diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/executor.h b/Microsoft.WindowsAzure.Storage/includes/wascore/executor.h index dd8afa28..639f55f7 100644 --- a/Microsoft.WindowsAzure.Storage/includes/wascore/executor.h +++ b/Microsoft.WindowsAzure.Storage/includes/wascore/executor.h @@ -24,7 +24,12 @@ #include "util.h" #include "streams.h" #include "was/auth.h" +#include "wascore/constants.h" #include "wascore/resources.h" +#include "wascore/timer_handler.h" + +#pragma push_macro("max") +#undef max namespace azure { namespace storage { namespace core { @@ -37,7 +42,7 @@ namespace azure { namespace storage { namespace core { { } - static pplx::task create(concurrency::streams::istream stream, bool calculate_md5 = false, utility::size64_t length = std::numeric_limits::max(), utility::size64_t max_length = std::numeric_limits::max()) + static pplx::task create(concurrency::streams::istream stream, checksum_type calculate_checksum = checksum_type::none, utility::size64_t length = std::numeric_limits::max(), utility::size64_t max_length = std::numeric_limits::max(), const pplx::cancellation_token& cancellation_token = pplx::cancellation_token::none()) { if (length == std::numeric_limits::max()) { @@ -49,16 +54,25 @@ 
namespace azure { namespace storage { namespace core { throw std::invalid_argument(protocol::error_stream_length); } - if (!calculate_md5 && stream.can_seek()) + if (calculate_checksum == checksum_type::none && stream.can_seek()) { - return pplx::task_from_result(istream_descriptor(stream, length, utility::string_t())); + return pplx::task_from_result(istream_descriptor(stream, length, checksum(checksum_none))); } - hash_provider provider = calculate_md5 ? core::hash_provider::create_md5_hash_provider() : core::hash_provider(); + hash_provider provider = core::hash_provider(); + + if (calculate_checksum == checksum_type::md5) + { + provider = core::hash_provider::create_md5_hash_provider(); + } + else if (calculate_checksum == checksum_type::crc64) + { + provider = core::hash_provider::create_crc64_hash_provider(); + } concurrency::streams::container_buffer> temp_buffer; concurrency::streams::ostream temp_stream; - if (calculate_md5) + if (calculate_checksum != checksum_type::none) { temp_stream = hash_wrapper_streambuf(temp_buffer, provider).create_ostream(); } @@ -67,10 +81,11 @@ namespace azure { namespace storage { namespace core { temp_stream = temp_buffer.create_ostream(); } - return stream_copy_async(stream, temp_stream, length, max_length).then([temp_buffer, provider] (pplx::task buffer_task) mutable -> istream_descriptor + return stream_copy_async(stream, temp_stream, length, max_length, cancellation_token).then([temp_buffer, provider] (pplx::task buffer_task) mutable -> istream_descriptor { + auto length = buffer_task.get(); provider.close(); - return istream_descriptor(concurrency::streams::container_stream>::open_istream(temp_buffer.collection()), buffer_task.get(), provider.hash()); + return istream_descriptor(concurrency::streams::container_stream>::open_istream(temp_buffer.collection()), length, provider.hash()); }); } @@ -89,9 +104,9 @@ namespace azure { namespace storage { namespace core { return m_length; } - const utility::string_t& content_md5() 
const + const checksum& content_checksum() const { - return m_content_md5; + return m_content_checksum; } void rewind() @@ -101,14 +116,14 @@ namespace azure { namespace storage { namespace core { private: - istream_descriptor(concurrency::streams::istream stream, utility::size64_t length, utility::string_t content_md5) - : m_stream(stream), m_offset(stream.tell()), m_length(length), m_content_md5(std::move(content_md5)) + istream_descriptor(concurrency::streams::istream stream, utility::size64_t length, checksum content_checksum) + : m_stream(stream), m_offset(stream.tell()), m_length(length), m_content_checksum(std::move(content_checksum)) { } concurrency::streams::istream m_stream; concurrency::streams::istream::pos_type m_offset; - utility::string_t m_content_md5; + checksum m_content_checksum; utility::size64_t m_length; }; @@ -121,8 +136,8 @@ namespace azure { namespace storage { namespace core { { } - ostream_descriptor(utility::size64_t length, utility::string_t content_md5) - : m_length(length), m_content_md5(std::move(content_md5)) + ostream_descriptor(utility::size64_t length, checksum content_checksum) + : m_length(length), m_content_checksum(std::move(content_checksum)) { } @@ -131,14 +146,14 @@ namespace azure { namespace storage { namespace core { return m_length; } - const utility::string_t& content_md5() const + const checksum& content_checksum() const { - return m_content_md5; + return m_content_checksum; } private: - utility::string_t m_content_md5; + checksum m_content_checksum; utility::size64_t m_length; }; @@ -153,11 +168,23 @@ namespace azure { namespace storage { namespace core { { public: - explicit storage_command_base(const storage_uri& request_uri) - : m_request_uri(request_uri), m_location_mode(command_location_mode::primary_only) + explicit storage_command_base(const storage_uri& request_uri, const pplx::cancellation_token& cancellation_token, const bool use_timeout, std::shared_ptr timer_handler) + : m_request_uri(request_uri), 
m_location_mode(command_location_mode::primary_only), + m_cancellation_token(cancellation_token), m_calculate_response_body_checksum(checksum_type::none), m_use_timeout(use_timeout), m_timer_handler(timer_handler) { + if (m_use_timeout) + { + m_timer_handler = std::make_shared(m_cancellation_token); + } } +#if defined(_MSC_VER) && _MSC_VER < 1900 + + // Prevents the compiler from generating default assignment operator. + storage_command_base& operator=(storage_command_base& other) = delete; + +#endif + void set_request_body(istream_descriptor value) { m_request_body = value; @@ -173,9 +200,9 @@ namespace azure { namespace storage { namespace core { m_destination_stream = value; } - void set_calculate_response_body_md5(bool value) + void set_calculate_response_body_checksum(checksum_type value) { - m_calculate_response_body_md5 = value; + m_calculate_response_body_checksum = value; } void set_build_request(std::function value) @@ -226,6 +253,30 @@ namespace azure { namespace storage { namespace core { } } + bool is_canceled() + { + if (m_use_timeout) + { + return m_timer_handler->is_canceled(); + } + else + { + return m_cancellation_token.is_canceled(); + } + } + + const pplx::cancellation_token get_cancellation_token() const + { + if (m_use_timeout) + { + return m_timer_handler->get_cancellation_token(); + } + else + { + return m_cancellation_token; + } + } + private: virtual void preprocess_response(const web::http::http_response&, const request_result&, operation_context) = 0; @@ -234,9 +285,13 @@ namespace azure { namespace storage { namespace core { storage_uri m_request_uri; istream_descriptor m_request_body; concurrency::streams::ostream m_destination_stream; - bool m_calculate_response_body_md5; + checksum_type m_calculate_response_body_checksum; command_location_mode m_location_mode; + const pplx::cancellation_token m_cancellation_token; + std::shared_ptr m_timer_handler; + bool m_use_timeout; + std::function m_build_request; std::function m_sign_request; 
std::function m_recover_request; @@ -249,8 +304,8 @@ namespace azure { namespace storage { namespace core { { public: - explicit storage_command(const storage_uri& request_uri) - : storage_command_base(request_uri) + explicit storage_command(const storage_uri& request_uri, const pplx::cancellation_token& cancellation_token = pplx::cancellation_token::none(), const bool use_timeout = false, std::shared_ptr timer_handler = nullptr) + : storage_command_base(request_uri, cancellation_token, use_timeout, timer_handler) { } @@ -302,8 +357,8 @@ namespace azure { namespace storage { namespace core { { public: - explicit storage_command(const storage_uri& request_uri) - : storage_command_base(request_uri) + explicit storage_command(const storage_uri& request_uri, const pplx::cancellation_token& cancellation_token = pplx::cancellation_token::none(), const bool use_timeout = false, std::shared_ptr timer_handler = nullptr) + : storage_command_base(request_uri, cancellation_token, use_timeout, timer_handler) { } @@ -347,347 +402,23 @@ namespace azure { namespace storage { namespace core { executor_impl(std::shared_ptr command, const request_options& options, operation_context context) : m_command(command), m_request_options(options), m_context(context), m_is_hashing_started(false), m_total_downloaded(0), m_retry_count(0), m_current_location(get_first_location(options.location_mode())), - m_current_location_mode(options.location_mode()), m_retry_policy(options.retry_policy().clone()) + m_current_location_mode(options.location_mode()), m_retry_policy(options.retry_policy().clone()), + m_should_restart_hash_provider(false) { } - static pplx::task execute_async(std::shared_ptr command, const request_options& options, operation_context context) - { - if (!context.start_time().is_initialized()) - { - context.set_start_time(utility::datetime::utc_now()); - } - - // TODO: Use "it" variable name for iterators in for loops - // TODO: Reduce usage of auto variable types - - auto instance 
= std::make_shared(command, options, context); - return pplx::details::do_while([instance]() -> pplx::task - { - // 0. Begin request - instance->validate_location_mode(); - - // 1. Build request - instance->m_start_time = utility::datetime::utc_now(); - instance->m_uri_builder = web::http::uri_builder(instance->m_command->m_request_uri.get_location_uri(instance->m_current_location)); - instance->m_request = instance->m_command->m_build_request(instance->m_uri_builder, instance->m_request_options.server_timeout(), instance->m_context); - instance->m_request_result = request_result(instance->m_start_time, instance->m_current_location); - - if (logger::instance().should_log(instance->m_context, client_log_level::log_level_informational)) - { - utility::string_t str; - str.reserve(256); - str.append(_XPLATSTR("Starting ")).append(instance->m_request.method()).append(_XPLATSTR(" request to ")).append(instance->m_request.request_uri().to_string()); - logger::instance().log(instance->m_context, client_log_level::log_level_informational, str); - } - - // 2. 
Set Headers - auto& client_request_id = instance->m_context.client_request_id(); - if (!client_request_id.empty()) - { - instance->add_request_header(protocol::ms_header_client_request_id, client_request_id); - } - - auto& user_headers = instance->m_context.user_headers(); - for (auto iter = user_headers.begin(); iter != user_headers.end(); ++iter) - { - instance->add_request_header(iter->first, iter->second); - } - - // If the command provided a request body, set it on the http_request object - if (instance->m_command->m_request_body.is_valid()) - { - instance->m_command->m_request_body.rewind(); - instance->m_request.set_body(instance->m_command->m_request_body.stream(), instance->m_command->m_request_body.length(), utility::string_t()); - } - - // If the command wants to copy the response body to a stream, set it - // on the http_request object - if (instance->m_command->m_destination_stream) - { - // Calculate the length and MD5 hash if needed as the incoming data is read - if (!instance->m_is_hashing_started) - { - if (instance->m_command->m_calculate_response_body_md5) - { - instance->m_hash_provider = hash_provider::create_md5_hash_provider(); - } - - instance->m_total_downloaded = 0; - instance->m_is_hashing_started = true; - - // TODO: Consider using hash_provider::is_enabled instead of m_is_hashing_started to signal when the hash provider has been closed - } - - instance->m_response_streambuf = hash_wrapper_streambuf(instance->m_command->m_destination_stream.streambuf(), instance->m_hash_provider); - instance->m_request.set_response_stream(instance->m_response_streambuf.create_ostream()); - } - - // Let the user know we are ready to send - auto sending_request = instance->m_context._get_impl()->sending_request(); - if (sending_request) - { - sending_request(instance->m_request, instance->m_context); - } - - // 3. Sign Request - instance->m_command->m_sign_request(instance->m_request, instance->m_context); - - // 4. 
Set HTTP client configuration - web::http::client::http_client_config config; - if (instance->m_context.proxy().is_specified()) - { - config.set_proxy(instance->m_context.proxy()); - } - - instance->remaining_time(); - config.set_timeout(instance->m_request_options.noactivity_timeout()); - - size_t http_buffer_size = instance->m_request_options.http_buffer_size(); - if (http_buffer_size > 0) - { - config.set_chunksize(http_buffer_size); - } - - // 5-6. Potentially upload data and get response -#ifdef _WIN32 - web::http::client::http_client client(instance->m_request.request_uri().authority(), config); - return client.request(instance->m_request).then([instance](pplx::task get_headers_task)->pplx::task -#else - std::shared_ptr client = core::http_client_reusable::get_http_client(instance->m_request.request_uri().authority(), config); - return client->request(instance->m_request).then([instance](pplx::task get_headers_task) -> pplx::task -#endif // _WIN32 - { - // Headers are ready. It should be noted that http_client will - // continue to download the response body in parallel. - auto response = get_headers_task.get(); - - if (logger::instance().should_log(instance->m_context, client_log_level::log_level_informational)) - { - utility::string_t str; - str.reserve(128); - str.append(_XPLATSTR("Response received. Status code = ")).append(utility::conversions::print_string(response.status_code())).append(_XPLATSTR(". Reason = ")).append(response.reason_phrase()); - logger::instance().log(instance->m_context, client_log_level::log_level_informational, str); - } - - try - { - // Let the user know we received response - auto response_received = instance->m_context._get_impl()->response_received(); - if (response_received) - { - response_received(instance->m_request, response, instance->m_context); - } - - // 7. 
Do Response parsing (headers etc, no stream available here) - // This is when the status code will be checked and m_preprocess_response - // will throw a storage_exception if it is not expected. - instance->m_request_result = request_result(instance->m_start_time, instance->m_current_location, response, false); - instance->m_command->preprocess_response(response, instance->m_request_result, instance->m_context); - - if (logger::instance().should_log(instance->m_context, client_log_level::log_level_informational)) - { - logger::instance().log(instance->m_context, client_log_level::log_level_informational, _XPLATSTR("Successful request ID = ") + instance->m_request_result.service_request_id()); - } - - // 8. Potentially download data - return response.content_ready(); - } - catch (const storage_exception& e) - { - // If the exception already contains an error message, the issue is not with - // the response, so rethrowing is the right thing. - if (e.what() != NULL && e.what()[0] != '\0') - { - throw; - } - - // Otherwise, response body might contain an error coming from the Storage service. - - return response.content_ready().then([instance](pplx::task get_error_body_task) -> web::http::http_response - { - auto response = get_error_body_task.get(); - - if (!instance->m_command->m_destination_stream) - { - // However, if the command has a destination stream, there is no guarantee that it - // is seekable and thus it cannot be read back to parse the error. - instance->m_request_result = request_result(instance->m_start_time, instance->m_current_location, response, true); - } - else - { - // Command has a destination stream. In this case, error information - // contained in response body might have been written into the destination - // stream. Need recreate the hash_provider since a retry might be needed. 
- instance->m_is_hashing_started = false; - } - - if (logger::instance().should_log(instance->m_context, client_log_level::log_level_warning)) - { - logger::instance().log(instance->m_context, client_log_level::log_level_warning, _XPLATSTR("Failed request ID = ") + instance->m_request_result.service_request_id()); - } - - throw storage_exception(utility::conversions::to_utf8string(response.reason_phrase())); - }); - } - }).then([instance](pplx::task get_body_task) -> pplx::task - { - // 9. Evaluate response & parse results - auto response = get_body_task.get(); - - if (instance->m_command->m_destination_stream) - { - utility::size64_t current_total_downloaded = instance->m_response_streambuf.total_written(); - utility::size64_t content_length = instance->m_request_result.content_length(); - if (content_length != -1 && current_total_downloaded != content_length) - { - // The download was interrupted before it could complete - throw storage_exception(protocol::error_incorrect_length); - } - } - - // It is now time to call m_postprocess_response - // Finish the MD5 hash if MD5 was being calculated - instance->m_hash_provider.close(); - instance->m_is_hashing_started = false; - - ostream_descriptor descriptor; - if (instance->m_response_streambuf) - { - utility::size64_t total_downloaded = instance->m_total_downloaded + instance->m_response_streambuf.total_written(); - descriptor = ostream_descriptor(total_downloaded, instance->m_hash_provider.hash()); - } - - return instance->m_command->postprocess_response(response, instance->m_request_result, descriptor, instance->m_context).then([instance](pplx::task result_task) - { - try - { - result_task.get(); - } - catch (const storage_exception& e) - { - if (e.result().is_response_available()) - { - instance->m_request_result.set_http_status_code(e.result().http_status_code()); - instance->m_request_result.set_extended_error(e.result().extended_error()); - } - - throw; - } - - }); - }).then([instance](pplx::task final_task) 
-> pplx::task - { - bool retryable_exception = true; - instance->m_context._get_impl()->add_request_result(instance->m_request_result); - - try - { - try - { - final_task.wait(); - } - catch (const storage_exception& e) - { - retryable_exception = e.retryable(); - throw; - } - } - catch (const std::exception& e) - { - // - // exception thrown by previous steps are handled here below - // - - if (logger::instance().should_log(instance->m_context, client_log_level::log_level_warning)) - { - logger::instance().log(instance->m_context, client_log_level::log_level_warning, _XPLATSTR("Exception thrown while processing response: ") + utility::conversions::to_string_t(e.what())); - } - - if (!retryable_exception) - { - if (logger::instance().should_log(instance->m_context, client_log_level::log_level_error)) - { - logger::instance().log(instance->m_context, client_log_level::log_level_error, _XPLATSTR("Exception was not retryable: ") + utility::conversions::to_string_t(e.what())); - } - - throw storage_exception(e.what(), instance->m_request_result, capture_inner_exception(e), false); - } - - // An exception occured and thus the request might be retried. Ask the retry policy. 
- retry_context context(instance->m_retry_count++, instance->m_request_result, instance->get_next_location(), instance->m_current_location_mode); - retry_info retry(instance->m_retry_policy.evaluate(context, instance->m_context)); - if (!retry.should_retry()) - { - if (logger::instance().should_log(instance->m_context, client_log_level::log_level_error)) - { - logger::instance().log(instance->m_context, client_log_level::log_level_error, _XPLATSTR("Retry policy did not allow for a retry, so throwing exception: ") + utility::conversions::to_string_t(e.what())); - } - - throw storage_exception(e.what(), instance->m_request_result, capture_inner_exception(e), false); - } - - instance->m_current_location = retry.target_location(); - instance->m_current_location_mode = retry.updated_location_mode(); - - if (instance->m_response_streambuf) - { - instance->m_total_downloaded += instance->m_response_streambuf.total_written(); - } - - // Try to recover the request. If it cannot be recovered, it cannot be retried - // even if the retry policy allowed for a retry. 
- if (instance->m_command->m_recover_request && - !instance->m_command->m_recover_request(instance->m_total_downloaded, instance->m_context)) - { - if (logger::instance().should_log(instance->m_context, client_log_level::log_level_error)) - { - logger::instance().log(instance->m_context, client_log_level::log_level_error, _XPLATSTR("Cannot recover request for retry, so throwing exception: ") + utility::conversions::to_string_t(e.what())); - } - - throw storage_exception(e.what(), instance->m_request_result, capture_inner_exception(e), false); - } - - if (logger::instance().should_log(instance->m_context, client_log_level::log_level_informational)) - { - utility::string_t str; - str.reserve(128); - str.append(_XPLATSTR("Retrying failed operation, number of retries: ")).append(utility::conversions::print_string(instance->m_retry_count)); - logger::instance().log(instance->m_context, client_log_level::log_level_informational, str); - } - - return complete_after(retry.retry_interval()).then([]() -> bool - { - // Returning true here will tell the outer do_while loop to loop one more time. - return true; - }); - } - - // Returning false here will cause do_while to exit. 
- return pplx::task_from_result(false); - }); - }).then([instance](pplx::task loop_task) - { - instance->m_context.set_end_time(utility::datetime::utc_now()); - loop_task.wait(); - - if (logger::instance().should_log(instance->m_context, client_log_level::log_level_informational)) - { - logger::instance().log(instance->m_context, client_log_level::log_level_informational, _XPLATSTR("Operation completed successfully")); - } - }); - } - + WASTORAGE_API static pplx::task execute_async(std::shared_ptr command, const request_options& options, operation_context context); + private: - std::chrono::seconds remaining_time() const + std::chrono::milliseconds remaining_time() const { - if (m_request_options.operation_expiry_time().is_initialized()) + if (m_request_options.operation_expiry_time().time_since_epoch().count()) { - auto now = utility::datetime::utc_now(); - if (m_request_options.operation_expiry_time().to_interval() > now.to_interval()) + auto duration = std::chrono::duration_cast(m_request_options.operation_expiry_time() - std::chrono::system_clock::now()); + if (duration.count() > 0) { - return std::chrono::seconds(m_request_options.operation_expiry_time() - now); + return duration; } else { @@ -695,7 +426,17 @@ namespace azure { namespace storage { namespace core { } } - return std::chrono::seconds(); + return std::chrono::milliseconds(); + } + + void assert_canceled() const + { + //Throw timeout if timeout is the reason of canceling. 
+ core::assert_timed_out_by_timer(m_command->m_timer_handler); + if (m_command->is_canceled()) + { + throw storage_exception(protocol::error_operation_canceled); + } } static storage_location get_first_location(location_mode mode) @@ -823,6 +564,7 @@ namespace azure { namespace storage { namespace core { web::http::http_request m_request; request_result m_request_result; bool m_is_hashing_started; + bool m_should_restart_hash_provider; hash_provider m_hash_provider; hash_wrapper_streambuf m_response_streambuf; utility::size64_t m_total_downloaded; @@ -857,3 +599,5 @@ namespace azure { namespace storage { namespace core { }; }}} // namespace azure::storage::core + +#pragma pop_macro("max") diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/filestream.h b/Microsoft.WindowsAzure.Storage/includes/wascore/filestream.h index d2386669..4182495f 100644 --- a/Microsoft.WindowsAzure.Storage/includes/wascore/filestream.h +++ b/Microsoft.WindowsAzure.Storage/includes/wascore/filestream.h @@ -32,8 +32,8 @@ namespace azure { namespace storage { namespace core { : m_file(file), m_file_length(length), m_condition(access_condition), m_options(options), m_context(context), m_semaphore(options.parallelism_factor()),m_current_file_offset(0) { - m_buffer_size = protocol::max_block_size; - m_next_buffer_size = protocol::max_block_size; + m_buffer_size = protocol::max_range_size; + m_next_buffer_size = protocol::max_range_size; if (m_options.use_transactional_md5()) { diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/hashing.h b/Microsoft.WindowsAzure.Storage/includes/wascore/hashing.h index af53e284..360f984a 100644 --- a/Microsoft.WindowsAzure.Storage/includes/wascore/hashing.h +++ b/Microsoft.WindowsAzure.Storage/includes/wascore/hashing.h @@ -21,15 +21,10 @@ #include "wascore/basic_types.h" #include "was/core.h" +#include "was/crc64.h" #ifdef _WIN32 -#ifndef WIN32_LEAN_AND_MEAN -#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers -#endif 
-#define NOMINMAX
-#include <windows.h>
 #include <bcrypt.h>
-#include
 #else
 #include <openssl/hmac.h>
 #include <openssl/md5.h>
@@ -48,172 +43,165 @@ namespace azure { namespace storage { namespace core {

             virtual bool is_enabled() const = 0;
             virtual void write(const uint8_t* data, size_t count) = 0;
             virtual void close() = 0;
-            virtual utility::string_t hash() const = 0;
+            virtual checksum hash() const = 0;
         };

-#ifdef _WIN32
-
-        class cryptography_hash_algorithm
+        class null_hash_provider_impl : public hash_provider_impl
         {
         public:
-            ~cryptography_hash_algorithm();
-
-            operator BCRYPT_ALG_HANDLE() const
+            ~null_hash_provider_impl() override
             {
-                return m_algorithm_handle;
             }

-        protected:
-            cryptography_hash_algorithm(LPCWSTR algorithm_id, ULONG flags);
-
-        private:
-            BCRYPT_ALG_HANDLE m_algorithm_handle;
-        };
-
-        class hmac_sha256_hash_algorithm : public cryptography_hash_algorithm
-        {
-        public:
-            static const hmac_sha256_hash_algorithm& instance()
+            bool is_enabled() const override
             {
-                return m_instance;
+                return false;
             }

-        private:
-            hmac_sha256_hash_algorithm()
-                : cryptography_hash_algorithm(BCRYPT_SHA256_ALGORITHM, BCRYPT_ALG_HANDLE_HMAC_FLAG)
+            void write(const uint8_t* data, size_t count) override
             {
+                // no-op
+                UNREFERENCED_PARAMETER(data);
+                UNREFERENCED_PARAMETER(count);
             }

-            static hmac_sha256_hash_algorithm m_instance;
-        };
-
-        class md5_hash_algorithm : public cryptography_hash_algorithm
-        {
-        public:
-            static const md5_hash_algorithm& instance()
+            void close() override
             {
-                return m_instance;
+                // no-op
             }

-        private:
-            md5_hash_algorithm()
-                : cryptography_hash_algorithm(BCRYPT_MD5_ALGORITHM, 0)
+            checksum hash() const override
             {
+                return checksum(checksum_none);
             }
-
-            static md5_hash_algorithm m_instance;
         };

         class cryptography_hash_provider_impl : public hash_provider_impl
         {
         public:
-            cryptography_hash_provider_impl(const cryptography_hash_algorithm& algorithm, const std::vector<uint8_t>& key);
+#ifdef _WIN32
+            cryptography_hash_provider_impl(BCRYPT_HANDLE algorithm_handle, const std::vector<uint8_t>& key);
             ~cryptography_hash_provider_impl() override;

-            bool is_enabled() const override
-            {
-                return true;
-            }
-
             void write(const uint8_t* data, size_t count) override;
             void close() override;
+#endif

-            utility::string_t hash() const override
-            {
-                return utility::conversions::to_base64(m_hash);
-            }
+        protected:
+            std::vector<uint8_t> m_hash;

+#ifdef _WIN32
         private:
             std::vector<uint8_t> m_hash_object;
             BCRYPT_HASH_HANDLE m_hash_handle;
-            std::vector<uint8_t> m_hash;
+#endif
         };

         class hmac_sha256_hash_provider_impl : public cryptography_hash_provider_impl
         {
         public:
-            hmac_sha256_hash_provider_impl(const std::vector<uint8_t>& key);
-        };
-
-        class md5_hash_provider_impl : public cryptography_hash_provider_impl
-        {
-        public:
-            md5_hash_provider_impl();
-        };
-
-#else // Linux
-
-        class cryptography_hash_provider_impl : public hash_provider_impl
-        {
-        public:
-            ~cryptography_hash_provider_impl() override
-            {
-            }
+            explicit hmac_sha256_hash_provider_impl(const std::vector<uint8_t>& key);
+            ~hmac_sha256_hash_provider_impl() override;

             bool is_enabled() const override
             {
                 return true;
             }

-            utility::string_t hash() const override
+            void write(const uint8_t* data, size_t count) override;
+            void close() override;
+
+            checksum hash() const override
             {
-                return utility::conversions::to_base64(m_hash);
+                return checksum(checksum_hmac_sha256, utility::conversions::to_base64(m_hash));
             }

-        protected:
-            std::vector<uint8_t> m_hash;
+        private:
+#ifdef _WIN32
+            static BCRYPT_ALG_HANDLE algorithm_handle();
+#else // Linux
+            HMAC_CTX* m_hash_context = nullptr;
+#endif
         };

-        class hmac_sha256_hash_provider_impl : public cryptography_hash_provider_impl
+        class md5_hash_provider_impl : public cryptography_hash_provider_impl
         {
         public:
-            hmac_sha256_hash_provider_impl(const std::vector<uint8_t>& key);
+            md5_hash_provider_impl();
+            ~md5_hash_provider_impl() override;
+
+            bool is_enabled() const override
+            {
+                return true;
+            }

             void write(const uint8_t* data, size_t count) override;
             void close() override;

+            checksum hash() const override
+            {
+                return checksum(checksum_md5, utility::conversions::to_base64(m_hash));
+            }
+
         private:
-            HMAC_CTX m_hash_context;
+#ifdef _WIN32
+            static BCRYPT_ALG_HANDLE algorithm_handle();
+#else // Linux
+            MD5_CTX* m_hash_context = nullptr;
+#endif
         };

-        class md5_hash_provider_impl : public cryptography_hash_provider_impl
+        class sha256_hash_provider_impl : public cryptography_hash_provider_impl
         {
         public:
-            md5_hash_provider_impl();
+            sha256_hash_provider_impl();
+            ~sha256_hash_provider_impl() override;
+
+            bool is_enabled() const override
+            {
+                return true;
+            }

             void write(const uint8_t* data, size_t count) override;
             void close() override;

-        private:
-            MD5_CTX m_hash_context;
-        };
+            checksum hash() const override
+            {
+                return checksum(checksum_sha256, utility::conversions::to_base64(m_hash));
+            }

+        private:
+#ifdef _WIN32
+            static BCRYPT_ALG_HANDLE algorithm_handle();
+#else // Linux
+            SHA256_CTX* m_hash_context = nullptr;
 #endif
+        };

-        class null_hash_provider_impl : public hash_provider_impl
+        class crc64_hash_provider_impl : public hash_provider_impl
         {
         public:
             bool is_enabled() const override
             {
-                return false;
+                return true;
             }

             void write(const uint8_t* data, size_t count) override
             {
-                // no-op
-                UNREFERENCED_PARAMETER(data);
-                UNREFERENCED_PARAMETER(count);
+                m_crc = update_crc64(data, count, m_crc);
             }

             void close() override
             {
-                // no-op
             }

-            utility::string_t hash() const override
+            checksum hash() const override
             {
-                return utility::string_t();
+                return checksum(checksum_crc64, m_crc);
             }
+
+        private:
+            uint64_t m_crc = INITIAL_CRC64;
         };

         class hash_provider
@@ -239,7 +227,7 @@ namespace azure { namespace storage { namespace core {
                 m_implementation->close();
             }

-            utility::string_t hash() const
+            checksum hash() const
             {
                 return m_implementation->hash();
             }
@@ -254,6 +242,16 @@ namespace azure { namespace storage { namespace core {
                 return hash_provider(std::make_shared<null_hash_provider_impl>());
             }

+            static hash_provider create_sha256_hash_provider()
+            {
+                return hash_provider(std::make_shared<sha256_hash_provider_impl>());
+            }
+
+            static hash_provider create_crc64_hash_provider()
+            {
+                return hash_provider(std::make_shared<crc64_hash_provider_impl>());
+            }
+
         private:
             explicit hash_provider(std::shared_ptr<hash_provider_impl> implementation)
                 : m_implementation(implementation)
diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/protocol.h b/Microsoft.WindowsAzure.Storage/includes/wascore/protocol.h
index f08382ed..e1a8ce8d 100644
--- a/Microsoft.WindowsAzure.Storage/includes/wascore/protocol.h
+++ b/Microsoft.WindowsAzure.Storage/includes/wascore/protocol.h
@@ -32,6 +32,7 @@ namespace azure { namespace storage { namespace protocol {
     web::http::http_request get_service_properties(web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request set_service_properties(web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request get_service_stats(web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request get_account_properties(web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);

     void add_optional_header(web::http::http_headers& headers, const utility::string_t& header, const utility::string_t& value);
     void add_metadata(web::http::http_request& request, const cloud_metadata& metadata);
@@ -47,26 +48,29 @@ namespace azure { namespace storage { namespace protocol {
     web::http::http_request list_blobs(const utility::string_t& prefix, const utility::string_t& delimiter, blob_listing_details::values includes, int max_results, const continuation_token& token, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request lease_blob_container(const utility::string_t& lease_action, const utility::string_t& proposed_lease_id, const lease_time& duration, const lease_break_period& break_period, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request lease_blob(const utility::string_t& lease_action, const utility::string_t& proposed_lease_id, const lease_time& duration, const lease_break_period& break_period, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request put_block(const utility::string_t& block_id, const utility::string_t& content_md5, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request put_block_list(const cloud_blob_properties& properties, const cloud_metadata& metadata, const utility::string_t& content_md5, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request put_block(const utility::string_t& block_id, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request put_block_list(const cloud_blob_properties& properties, const cloud_metadata& metadata, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request get_block_list(block_listing_filter listing_filter, const utility::string_t& snapshot_time, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request get_page_ranges(utility::size64_t offset, utility::size64_t length, const utility::string_t& snapshot_time, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request get_page_ranges_diff(utility::string_t previous_snapshort_time, utility::size64_t offset, utility::size64_t length, const utility::string_t& snapshot_time, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request put_page(page_range range, page_write write, const utility::string_t& content_md5, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request append_block(const utility::string_t& content_md5, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request put_block_blob(const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request put_page_blob(utility::size64_t size, int64_t sequence_number, const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request put_append_blob(const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request get_blob(utility::size64_t offset, utility::size64_t length, bool get_range_content_md5, const utility::string_t& snapshot_time, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request get_blob_properties(const utility::string_t& snapshot_time, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request get_page_ranges_diff(const utility::string_t& previous_snapshot_time, const utility::string_t& previous_snapshot_url, utility::size64_t offset, utility::size64_t length, const utility::string_t& snapshot_time, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request put_page(page_range range, page_write write, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request append_block(const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request put_block_blob(const checksum& content_checksum, const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request put_page_blob(utility::size64_t size, const utility::string_t& tier, int64_t sequence_number, const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request put_append_blob(const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request get_blob(utility::size64_t offset, utility::size64_t length, checksum_type needs_checksum, const utility::string_t& snapshot_time, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request get_blob_properties(const utility::string_t& snapshot_time, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request set_blob_properties(const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request resize_page_blob(utility::size64_t size, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request set_page_blob_sequence_number(const azure::storage::sequence_number& sequence_number, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request snapshot_blob(const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request set_blob_metadata(const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request snapshot_blob(const cloud_metadata& metadata, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request set_blob_metadata(const cloud_metadata& metadata, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request delete_blob(delete_snapshots_option snapshots_option, const utility::string_t& snapshot_time, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request copy_blob(const web::http::uri& source, const access_condition& source_condition, const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request copy_blob(const web::http::uri& source, const utility::string_t& tier, const access_condition& source_condition, const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request abort_copy_blob(const utility::string_t& copy_id, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request incremental_copy_blob(const web::http::uri& source, const access_condition& condition, const cloud_metadata& metadata, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request set_blob_tier(const utility::string_t& tier, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request get_user_delegation_key(web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);

     void add_lease_id(web::http::http_request& request, const access_condition& condition);
     void add_sequence_number_condition(web::http::http_request& request, const access_condition& condition);
     void add_access_condition(web::http::http_request& request, const access_condition& condition);
@@ -82,7 +86,7 @@ namespace azure { namespace storage { namespace protocol {
     storage_uri generate_table_uri(const cloud_table_client& service_client, const cloud_table& table, const table_query& query, const continuation_token& token);
     web::http::http_request execute_table_operation(const cloud_table& table, table_operation_type operation_type, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request execute_operation(const table_operation& operation, table_payload_format payload_format, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request execute_batch_operation(Concurrency::streams::stringstreambuf& response_buffer, const cloud_table& table, const table_batch_operation& batch_operation, table_payload_format payload_format, bool is_query, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request execute_batch_operation(const cloud_table& table, const table_batch_operation& batch_operation, table_payload_format payload_format, bool is_query, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request execute_query(table_payload_format payload_format, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request get_table_acl(web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request set_table_acl(web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
@@ -118,23 +122,29 @@ namespace azure { namespace storage { namespace protocol {
     web::http::http_request get_file_share_stats(web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request get_file_share_acl(web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request set_file_share_acl(web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request list_files_and_directories(int64_t max_results, const continuation_token& token, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request create_file_directory(const cloud_metadata& metadata, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request get_file_share_permission(const utility::string_t& permission_key, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request set_file_share_permission(web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request list_files_and_directories(const utility::string_t& prefix, int64_t max_results, const continuation_token& token, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request create_file_directory(const cloud_metadata& metadata, const cloud_file_directory_properties& properties, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request delete_file_directory(web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request get_file_directory_properties(web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request set_file_directory_properties(const cloud_file_directory_properties& properties, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
     web::http::http_request set_file_directory_metadata(const cloud_metadata& metadata, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request create_file(const int64_t length, const cloud_metadata& metadata, const cloud_file_properties& properties, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request delete_file(web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request get_file_properties(web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request set_file_properties(const cloud_file_properties& properties, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request set_file_metadata(const cloud_metadata& metadata, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request copy_file(const web::http::uri& source, const cloud_metadata& metadata, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request copy_file_from_blob(const web::http::uri& source, const access_condition& condition, const cloud_metadata& metadata, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request abort_copy_file(const utility::string_t& copy_id, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request list_file_ranges(utility::size64_t start_offset, utility::size64_t length, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request put_file_range(file_range range, file_range_write write, utility::string_t content_md5, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-    web::http::http_request get_file(utility::size64_t start_offset, utility::size64_t length, bool md5_validation, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
-
+    web::http::http_request create_file(const int64_t length, const cloud_metadata& metadata, const cloud_file_properties& properties, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request delete_file(const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request get_file_properties(const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request set_file_properties(const cloud_file_properties& properties, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request resize_with_properties(const cloud_file_properties& properties, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request set_file_metadata(const cloud_metadata& metadata, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request copy_file(const web::http::uri& source, const cloud_metadata& metadata, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request copy_file_from_blob(const web::http::uri& source, const access_condition& condition, const cloud_metadata& metadata, const file_access_condition& file_condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request abort_copy_file(const utility::string_t& copy_id, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request list_file_ranges(utility::size64_t start_offset, utility::size64_t length, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request put_file_range(file_range range, file_range_write write, utility::string_t content_md5, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request get_file(utility::size64_t start_offset, utility::size64_t length, bool md5_validation, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context);
+    web::http::http_request lease_file(const utility::string_t& lease_action, const utility::string_t& proposed_lease_id, const file_access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context);
+
+    void add_access_condition(web::http::http_request& request, const file_access_condition& condition);

     // Common response parsers

     template
@@ -160,15 +170,18 @@ namespace azure { namespace storage { namespace protocol {
     void preprocess_response_void(const web::http::http_response& response, const request_result& result, operation_context context);

-    utility::datetime parse_last_modified(const utility::string_t& value);
+    utility::datetime parse_datetime_rfc1123(const utility::string_t& value);
+    utility::datetime parse_datetime_iso8601(const utility::string_t& value);
     utility::string_t parse_lease_id(const utility::string_t& value);
     lease_status parse_lease_status(const utility::string_t& value);
     lease_state parse_lease_state(const utility::string_t& value);
     lease_duration parse_lease_duration(const utility::string_t& value);
     std::chrono::seconds parse_lease_time(const utility::string_t& value);
+    cloud_file_attributes parse_file_attributes(const utility::string_t& value);
     int parse_approximate_messages_count(const web::http::http_response& response);
     utility::string_t parse_pop_receipt(const web::http::http_response& response);
     utility::datetime parse_next_visible_time(const web::http::http_response& response);
+    blob_container_public_access_type parse_public_access_type(const utility::string_t& value);

     utility::string_t get_header_value(const web::http::http_response& response, const utility::string_t& header);
     utility::string_t get_header_value(const web::http::http_headers& headers, const utility::string_t& header);
@@ -183,25 +196,32 @@ namespace azure { namespace storage { namespace protocol {
     std::chrono::seconds parse_lease_time(const web::http::http_response& response);
     cloud_metadata parse_metadata(const web::http::http_response& response);
     storage_extended_error parse_extended_error(const web::http::http_response& response);
+    blob_container_public_access_type parse_public_access_type(const web::http::http_response& response);

     class response_parsers
     {
     public:
         static copy_state parse_copy_state(const web::http::http_response& response);
-        static utility::datetime parse_copy_completion_time(const utility::string_t& value);
         static bool parse_copy_progress(const utility::string_t& value, int64_t& bytes_copied, int64_t& bytes_total);
         static copy_status parse_copy_status(const utility::string_t& value);
+        static bool parse_boolean(const utility::string_t& value);
+        static utility::datetime parse_datetime(const utility::string_t& value, utility::datetime::date_format format = utility::datetime::date_format::RFC_1123);
+        static standard_blob_tier parse_standard_blob_tier(const utility::string_t& value);
+        static premium_blob_tier parse_premium_blob_tier(const utility::string_t& value);
     };

     class blob_response_parsers
     {
     public:
         static blob_type parse_blob_type(const utility::string_t& value);
+        static standard_blob_tier parse_standard_blob_tier(const utility::string_t& value);
+        static premium_blob_tier parse_premium_blob_tier(const utility::string_t& value);
         static utility::size64_t parse_blob_size(const web::http::http_response& response);
-        static blob_container_public_access_type parse_public_access_type(const web::http::http_response& response);
+        static archive_status parse_archive_status(const utility::string_t& value);
         static cloud_blob_container_properties parse_blob_container_properties(const web::http::http_response& response);
         static cloud_blob_properties parse_blob_properties(const web::http::http_response& response);
+        static account_properties parse_account_properties(const web::http::http_response& response);
     };

     class table_response_parsers
@@ -209,7 +229,7 @@ namespace azure { namespace storage { namespace protocol {
     public:
         static utility::string_t parse_etag(const web::http::http_response& response);
         static continuation_token parse_continuation_token(const web::http::http_response& response, const request_result& result);
-        static std::vector<table_result> parse_batch_results(const web::http::http_response& response, Concurrency::streams::stringstreambuf& response_buffer, bool is_query, size_t batch_size);
+        static std::vector<table_result> parse_batch_results(const web::http::http_response& response, const concurrency::streams::container_buffer<std::vector<uint8_t>>& response_buffer, bool is_query, size_t batch_size);
         static std::vector<table_result> parse_query_results(const web::json::value& obj);
     };
diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/protocol_json.h b/Microsoft.WindowsAzure.Storage/includes/wascore/protocol_json.h
index cd22fb07..55dc103c 100644
--- a/Microsoft.WindowsAzure.Storage/includes/wascore/protocol_json.h
+++ b/Microsoft.WindowsAzure.Storage/includes/wascore/protocol_json.h
@@ -24,5 +24,7 @@ namespace azure { namespace storage { namespace protocol {
     table_entity parse_table_entity(const web::json::value& document);
     storage_extended_error parse_table_error(const web::json::value& document);
+    utility::string_t parse_file_permission(const web::json::value& document);
+    utility::string_t construct_file_permission(const utility::string_t& value);

 }}} // namespace azure::storage::protocol
diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/protocol_xml.h b/Microsoft.WindowsAzure.Storage/includes/wascore/protocol_xml.h
index 6657f9f2..12a7303a 100644
--- a/Microsoft.WindowsAzure.Storage/includes/wascore/protocol_xml.h
+++ b/Microsoft.WindowsAzure.Storage/includes/wascore/protocol_xml.h
@@ -21,8 +21,12 @@
 #include "was/blob.h"
 #include "was/queue.h"
 #include "was/file.h"
+#include "wascore/protocol.h"
 #include "wascore/xmlhelpers.h"

+#pragma push_macro("max")
+#undef max
+
 namespace azure { namespace storage { namespace protocol {

     class storage_error_reader : public core::xml::xml_reader
@@ -109,13 +113,21 @@ namespace azure { namespace storage { namespace protocol {

         std::vector move_items()
         {
-            parse();
+            auto result = parse();
+            if (result == xml_reader::parse_result::xml_not_complete)
+            {
+                throw storage_exception(protocol::error_xml_not_complete, true);
+            }
             return std::move(m_items);
         }

         utility::string_t move_next_marker()
         {
-            parse();
+            auto result = parse();
+            if (result == xml_reader::parse_result::xml_not_complete)
+            {
+                throw storage_exception(protocol::error_xml_not_complete, true);
+            }
             return std::move(m_next_marker);
         }
@@ -139,8 +151,8 @@ namespace azure { namespace storage { namespace protocol {
     {
     public:
-        cloud_blob_list_item(web::http::uri uri, utility::string_t name, utility::string_t snapshot_time, cloud_metadata metadata, cloud_blob_properties properties, copy_state copy_state)
-            : m_uri(std::move(uri)), m_name(std::move(name)), m_snapshot_time(std::move(snapshot_time)), m_metadata(std::move(metadata)), m_properties(std::move(properties)), m_copy_state(std::move(copy_state))
+        cloud_blob_list_item(web::http::uri uri, utility::string_t name, utility::string_t snapshot_time, bool is_current_version, cloud_metadata metadata, cloud_blob_properties properties, copy_state copy_state)
+            : m_uri(std::move(uri)), m_name(std::move(name)), m_snapshot_time(std::move(snapshot_time)), m_is_current_version(is_current_version), m_metadata(std::move(metadata)), m_properties(std::move(properties)), m_copy_state(std::move(copy_state))
         {
         }
@@ -159,6 +171,11 @@ namespace azure { namespace storage { namespace protocol {
             return std::move(m_snapshot_time);
         }

+        bool is_current_version() const
+        {
+            return m_is_current_version;
+        }
+
         cloud_metadata move_metadata()
         {
             return std::move(m_metadata);
@@ -179,6 +196,7 @@ namespace azure { namespace storage { namespace protocol {
         web::http::uri m_uri;
         utility::string_t m_name;
         utility::string_t m_snapshot_time;
+        bool m_is_current_version;
         cloud_metadata m_metadata;
         cloud_blob_properties m_properties;
         azure::storage::copy_state m_copy_state;
@@ -220,19 +238,31 @@ namespace azure { namespace storage { namespace protocol {

         std::vector move_blob_items()
         {
-            parse();
+            auto result = parse();
+            if (result == xml_reader::parse_result::xml_not_complete)
+            {
+                throw storage_exception(protocol::error_xml_not_complete, true);
+            }
             return std::move(m_blob_items);
         }

         std::vector move_blob_prefix_items()
         {
-            parse();
+            auto result = parse();
+            if (result == xml_reader::parse_result::xml_not_complete)
+            {
+                throw storage_exception(protocol::error_xml_not_complete, true);
+            }
             return std::move(m_blob_prefix_items);
         }

         utility::string_t move_next_marker()
         {
-            parse();
+            auto result = parse();
+            if (result == xml_reader::parse_result::xml_not_complete)
+            {
+                throw storage_exception(protocol::error_xml_not_complete, true);
+            }
             return std::move(m_next_marker);
         }
@@ -250,6 +280,7 @@ namespace azure { namespace storage { namespace protocol {
         utility::string_t m_name;
         web::http::uri m_uri;
         utility::string_t m_snapshot_time;
+        bool m_is_current_version = false;
         cloud_metadata m_metadata;
         cloud_blob_properties m_properties;
         copy_state m_copy_state;
@@ -267,7 +298,11 @@ namespace azure { namespace storage { namespace protocol {

         // Extracts the result. This method can only be called once on this reader
         std::vector move_result()
         {
-            parse();
+            auto result = parse();
+            if (result == xml_reader::parse_result::xml_not_complete)
+            {
+                throw storage_exception(protocol::error_xml_not_complete, true);
+            }
             return std::move(m_page_list);
         }
@@ -293,7 +328,11 @@ namespace azure { namespace storage { namespace protocol {

         // Extracts the result. This method can only be called once on this reader
         std::vector move_result()
         {
-            parse();
+            auto result = parse();
+            if (result == xml_reader::parse_result::xml_not_complete)
+            {
+                throw storage_exception(protocol::error_xml_not_complete, true);
+            }
             return std::move(m_page_list);
         }
@@ -319,7 +358,11 @@ namespace azure { namespace storage { namespace protocol {

         // Extracts the result.
This method can only be called once on this reader std::vector move_result() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw storage_exception(protocol::error_xml_not_complete, true); + } return std::move(m_block_list); } @@ -361,7 +404,11 @@ namespace azure { namespace storage { namespace protocol { shared_access_policies move_policies() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw storage_exception(protocol::error_xml_not_complete, true); + } return std::move(m_policies); } @@ -430,12 +477,12 @@ namespace azure { namespace storage { namespace protocol { if (policy.start().is_initialized()) { - write_element(xml_access_policy_start, core::convert_to_string_with_fixed_length_fractional_seconds(policy.start())); + write_element(xml_access_policy_start, core::convert_to_iso8601_string(policy.start(), 7)); } if (policy.expiry().is_initialized()) { - write_element(xml_access_policy_expiry, core::convert_to_string_with_fixed_length_fractional_seconds(policy.expiry())); + write_element(xml_access_policy_expiry, core::convert_to_iso8601_string(policy.expiry(), 7)); } if (policy.permission() != 0) @@ -488,13 +535,21 @@ namespace azure { namespace storage { namespace protocol { std::vector move_items() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw storage_exception(protocol::error_xml_not_complete, true); + } return std::move(m_items); } utility::string_t move_next_marker() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw storage_exception(protocol::error_xml_not_complete, true); + } return std::move(m_next_marker); } @@ -576,7 +631,12 @@ namespace azure { namespace storage { namespace protocol { std::vector move_items() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + // 
This is not a retryable exception because Get operation for messages changes server content. + throw storage_exception(protocol::error_xml_not_complete, false); + } return std::move(m_items); } @@ -649,13 +709,17 @@ namespace azure { namespace storage { namespace protocol { public: explicit get_share_stats_reader(concurrency::streams::istream stream) - : xml_reader(stream), m_quota(maximum_share_quota) + : xml_reader(stream), m_quota(std::numeric_limits::max()) { } - int32_t get() + int64_t get() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw storage_exception(protocol::error_xml_not_complete, true); + } return m_quota; } @@ -665,7 +729,7 @@ namespace azure { namespace storage { namespace protocol { virtual void handle_element(const utility::string_t& element_name); virtual void handle_end_element(const utility::string_t& element_name); - int32_t m_quota; + int64_t m_quota; }; class list_shares_reader : public core::xml::xml_reader @@ -679,13 +743,21 @@ namespace azure { namespace storage { namespace protocol { std::vector move_items() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw storage_exception(protocol::error_xml_not_complete, true); + } return std::move(m_items); } utility::string_t move_next_marker() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw storage_exception(protocol::error_xml_not_complete, true); + } return std::move(m_next_marker); } @@ -710,19 +782,27 @@ namespace azure { namespace storage { namespace protocol { public: explicit list_files_and_directories_reader(concurrency::streams::istream stream) - : xml_reader(stream), m_is_file(false), m_size(0) + : xml_reader(stream), m_size(0) { } std::vector move_items() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw 
storage_exception(protocol::error_xml_not_complete, true); + } return std::move(m_items); } utility::string_t move_next_marker() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw storage_exception(protocol::error_xml_not_complete, true); + } return std::move(m_next_marker); } @@ -736,11 +816,12 @@ namespace azure { namespace storage { namespace protocol { utility::string_t m_next_marker; utility::string_t m_share_name; utility::string_t m_directory_path; + utility::string_t m_directory_file_id; web::http::uri m_service_uri; - bool m_is_file; utility::string_t m_name; int64_t m_size; + utility::string_t m_file_id; }; class list_file_ranges_reader : public core::xml::xml_reader @@ -755,7 +836,11 @@ namespace azure { namespace storage { namespace protocol { // Extracts the result. This method can only be called once on this reader std::vector move_result() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw storage_exception(protocol::error_xml_not_complete, true); + } return std::move(m_range_list); } @@ -780,7 +865,11 @@ namespace azure { namespace storage { namespace protocol { service_properties move_properties() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw storage_exception(protocol::error_xml_not_complete, true); + } return std::move(m_service_properties); } @@ -830,7 +919,11 @@ namespace azure { namespace storage { namespace protocol { service_stats move_stats() { - parse(); + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw storage_exception(protocol::error_xml_not_complete, true); + } return std::move(m_service_stats); } @@ -845,4 +938,40 @@ namespace azure { namespace storage { namespace protocol { void handle_geo_replication_status(const utility::string_t& element_name); }; + class user_delegation_key_time_writer : public 
core::xml::xml_writer + { + public: + + user_delegation_key_time_writer() + { + } + + std::string write(const utility::datetime& start, const utility::datetime& expiry); + }; + + class user_delegation_key_reader : public core::xml::xml_reader + { + public: + explicit user_delegation_key_reader(concurrency::streams::istream stream) : xml_reader(stream) + { + } + + user_delegation_key move_key() + { + auto result = parse(); + if (result == xml_reader::parse_result::xml_not_complete) + { + throw storage_exception(protocol::error_xml_not_complete, true); + } + return std::move(m_key); + } + + protected: + void handle_element(const utility::string_t& element_name) override; + + user_delegation_key m_key; + }; + }}} // namespace azure::storage::protocol + +#pragma pop_macro("max") diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/resources.h b/Microsoft.WindowsAzure.Storage/includes/wascore/resources.h index 32b88532..60418793 100644 --- a/Microsoft.WindowsAzure.Storage/includes/wascore/resources.h +++ b/Microsoft.WindowsAzure.Storage/includes/wascore/resources.h @@ -24,7 +24,7 @@ namespace azure { namespace storage { namespace protocol { #define _RESOURCES -#define DAT(a, b) extern const char* a; const size_t a ## _size{ sizeof(b) / sizeof(utility::char_t) - 1 }; +#define DAT(a, b) extern const char* a; const size_t a ## _size = sizeof(b) / sizeof(char) - 1; #include "wascore/constants.dat" #undef DAT #undef _RESOURCES diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/streambuf.h b/Microsoft.WindowsAzure.Storage/includes/wascore/streambuf.h index ceb2939d..17f69eda 100644 --- a/Microsoft.WindowsAzure.Storage/includes/wascore/streambuf.h +++ b/Microsoft.WindowsAzure.Storage/includes/wascore/streambuf.h @@ -236,21 +236,25 @@ namespace azure { namespace storage { namespace core { pplx::task _putc(char_type ch) { - ++m_total_written; - m_hash_provider.write(&ch, 1); - - return m_inner_streambuf.putc(ch); + return m_inner_streambuf.putc(ch).then([this, 
ch](int_type ch_written) -> int_type + { + ++m_total_written; + m_hash_provider.write(&ch, 1); + return ch_written; + }); } pplx::task _putn(const char_type* ptr, size_t count) { - m_total_written += count; - m_hash_provider.write(ptr, count); - - return m_inner_streambuf.putn_nocopy(ptr, count); + return m_inner_streambuf.putn_nocopy(ptr, count).then([this, ptr](size_t count) -> size_t + { + m_total_written += count; + m_hash_provider.write(ptr, count); + return count; + }); } - utility::string_t hash() const + checksum hash() const { return m_hash_provider.hash(); } diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/streams.h b/Microsoft.WindowsAzure.Storage/includes/wascore/streams.h index acc41273..9969699f 100644 --- a/Microsoft.WindowsAzure.Storage/includes/wascore/streams.h +++ b/Microsoft.WindowsAzure.Storage/includes/wascore/streams.h @@ -44,7 +44,7 @@ namespace azure { namespace storage { namespace core { return base->total_written(); } - utility::string_t hash() const + checksum hash() const { const basic_hash_wrapper_streambuf<_CharType>* base = static_cast*>(Concurrency::streams::streambuf<_CharType>::get_base().get()); return base->hash(); @@ -56,7 +56,7 @@ namespace azure { namespace storage { namespace core { public: basic_cloud_ostreambuf() : basic_ostreambuf(), - m_current_streambuf_offset(0), m_committed(false) + m_current_streambuf_offset(0), m_committed(false), m_buffer_size(0), m_next_buffer_size(0) { } @@ -143,10 +143,10 @@ namespace azure { namespace storage { namespace core { class buffer_to_upload { public: - buffer_to_upload(concurrency::streams::container_buffer> buffer, const utility::string_t& content_md5) + buffer_to_upload(concurrency::streams::container_buffer> buffer, const checksum& content_checksum) : m_size(buffer.size()), m_stream(concurrency::streams::container_stream>::open_istream(std::move(buffer.collection()))), - m_content_md5(content_md5) + m_content_checksum(content_checksum) { } @@ -165,9 +165,9 @@ namespace 
azure { namespace storage { namespace core { return m_size == 0; } - const utility::string_t& content_md5() const + const checksum& content_checksum() const { - return m_content_md5; + return m_content_checksum; } private: @@ -175,7 +175,7 @@ namespace azure { namespace storage { namespace core { // Note: m_size must be initialized before m_stream, and thus must be listed first in this list. // This is because we use std::move to initialize m_stream, but we need to get the size first. utility::size64_t m_size; - utility::string_t m_content_md5; + checksum m_content_checksum; concurrency::streams::istream m_stream; }; diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/timer_handler.h b/Microsoft.WindowsAzure.Storage/includes/wascore/timer_handler.h new file mode 100644 index 00000000..32bb96ce --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/includes/wascore/timer_handler.h @@ -0,0 +1,87 @@ +// ----------------------------------------------------------------------------------------- +// +// Copyright 2018 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+// +// ----------------------------------------------------------------------------------------- + +#pragma once + +#include "cpprest/http_client.h" +#include + +#include "wascore/constants.h" + +#ifndef _WIN32 +#include +#include +#include "pplx/threadpool.h" +#else +#include +#endif + +namespace azure { namespace storage { namespace core { + /// + /// Used for internal logic of timer handling, including timer creation, deletion and cancellation + /// + class timer_handler : public std::enable_shared_from_this + { + public: + WASTORAGE_API explicit timer_handler(const pplx::cancellation_token& token); + + WASTORAGE_API ~timer_handler(); + + WASTORAGE_API void start_timer(const std::chrono::milliseconds& time); + + WASTORAGE_API void stop_timer(); + + bool timer_started() const + { + return m_timer_started.load(std::memory_order_acquire); + } + + pplx::cancellation_token get_cancellation_token() const + { + return m_worker_cancellation_token_source.get_token(); + } + + bool is_canceled() const + { + return m_worker_cancellation_token_source.get_token().is_canceled(); + } + + bool is_canceled_by_timeout() const + { + return m_is_canceled_by_timeout.load(std::memory_order_acquire); + } + + private: + pplx::cancellation_token_source m_worker_cancellation_token_source; + pplx::cancellation_token_registration m_cancellation_token_registration; + pplx::cancellation_token m_cancellation_token; + pplx::task m_timeout_task; + std::atomic m_is_canceled_by_timeout; + pplx::task_completion_event m_tce; + + std::mutex m_mutex; + + WASTORAGE_API pplx::task timeout_after(const std::chrono::milliseconds& time); + +#ifndef _WIN32 + std::shared_ptr> m_timer; +#else + std::shared_ptr> m_timer; +#endif + std::atomic m_timer_started; + }; +}}} diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/util.h b/Microsoft.WindowsAzure.Storage/includes/wascore/util.h index 0361ebe6..fcd86cd2 100644 --- a/Microsoft.WindowsAzure.Storage/includes/wascore/util.h +++ 
b/Microsoft.WindowsAzure.Storage/includes/wascore/util.h @@ -25,6 +25,10 @@ #include "cpprest/streams.h" #include "was/core.h" +#include "wascore/timer_handler.h" + +#pragma push_macro("max") +#undef max namespace azure { namespace storage { namespace core { @@ -61,19 +65,22 @@ namespace azure { namespace storage { namespace core { utility::string_t make_query_parameter(const utility::string_t& parameter_name, const utility::string_t& parameter_value, bool do_encoding = true); utility::size64_t get_remaining_stream_length(concurrency::streams::istream stream); - pplx::task stream_copy_async(concurrency::streams::istream istream, concurrency::streams::ostream ostream, utility::size64_t length, utility::size64_t max_length = std::numeric_limits::max()); + pplx::task stream_copy_async(concurrency::streams::istream istream, concurrency::streams::ostream ostream, utility::size64_t length, utility::size64_t max_length = std::numeric_limits::max(), const pplx::cancellation_token& cancellation_token = pplx::cancellation_token::none(), std::shared_ptr timer_handler = nullptr); pplx::task complete_after(std::chrono::milliseconds timeout); std::vector string_split(const utility::string_t& string, const utility::string_t& separator); bool is_empty_or_whitespace(const utility::string_t& value); + bool has_whitespace_or_empty(const utility::string_t& str); utility::string_t single_quote(const utility::string_t& value); bool is_nan(double value); bool is_finite(double value); bool is_integral(const utility::string_t& value); - utility::datetime truncate_fractional_seconds(utility::datetime value); utility::string_t convert_to_string(double value); + utility::string_t convert_to_string(const utility::string_t& source); utility::string_t convert_to_string(const std::vector& value); - utility::string_t convert_to_string_with_fixed_length_fractional_seconds(utility::datetime value); + utility::string_t convert_to_iso8601_string(const utility::datetime& value, int num_decimal_digits); 
utility::char_t utility_char_tolower(const utility::char_t& character); + utility::string_t str_trim_starting_trailing_whitespaces(const utility::string_t& str); + void assert_timed_out_by_timer(std::shared_ptr timer_handler); template utility::string_t convert_to_string(T value) @@ -125,3 +132,5 @@ namespace azure { namespace storage { namespace core { #endif }}} // namespace azure::storage::core + +#pragma pop_macro("max") diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/xml_wrapper.h b/Microsoft.WindowsAzure.Storage/includes/wascore/xml_wrapper.h new file mode 100644 index 00000000..18ca989a --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/includes/wascore/xml_wrapper.h @@ -0,0 +1,189 @@ +/*** +* ==++== +* +* Copyright (c) Microsoft Corporation. All rights reserved. +* Licensed under the Apache License, Version 2.0 (the "License"); +* you may not use this file except in compliance with the License. +* You may obtain a copy of the License at +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. 
+* +* ==--== +* =+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ +* +* xml_wrapper.h +* +* This file contains wrappers for libxml2 +* +* =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- +****/ + +#pragma once +#ifndef _XML_WRAPPER_H +#define _XML_WRAPPER_H + +#ifndef _WIN32 +#include +#include +#include + +#include "wascore/basic_types.h" + +namespace azure { namespace storage { namespace core { namespace xml { + + std::string xml_char_to_string(const xmlChar *xml_char); + + /// + /// A class to wrap xmlTextReader of the C library libxml2. This class provides the ability to read XML-formatted text. + /// + class xml_text_reader_wrapper { + public: + xml_text_reader_wrapper(const unsigned char* buffer, unsigned int size); + + ~xml_text_reader_wrapper(); + + /// + /// Moves to the next node in the stream. + /// + /// true if the node was read successfully, false if there are no more nodes to read + bool read(); + + /// + /// Gets the type of the current node. + /// + /// An integer that represents the type of the node + unsigned get_node_type(); + + /// + /// Checks whether the current node is empty + /// + /// True if the current node is empty, otherwise false + bool is_empty_element(); + + /// + /// Gets the local name of the node + /// + /// A string that indicates the local name of this node + std::string get_local_name(); + + /// + /// Gets the value of the node + /// + /// A string value of the node + std::string get_value(); + + /// + /// Moves to the first attribute of the node. + /// + /// True if the move is successful, false if empty + bool move_to_first_attribute(); + + /// + /// Moves to the next attribute of the node. + /// + /// True if the move is successful, false if empty + bool move_to_next_attribute(); + + private: + xmlTextReaderPtr m_reader; + }; + + /// + /// A class to wrap xmlNode of the C library libxml2.
This class provides the ability to create XML nodes. + /// + class xml_element_wrapper { + public: + xml_element_wrapper(); + + ~xml_element_wrapper(); + + xml_element_wrapper(xmlNode* node); + + /// + /// Adds a child element to this node. + /// + /// The name of the child node. + /// The namespace prefix of the child node. + /// The created child node. + xml_element_wrapper* add_child(const std::string& name, const std::string& prefix); + + /// + /// Adds a namespace declaration to the node. + /// + /// The namespace to associate with the prefix. + /// The namespace prefix + void set_namespace_declaration(const std::string& uri, const std::string& prefix); + + /// + /// Sets the namespace prefix. + /// + /// The namespace prefix to be set + void set_namespace(const std::string& prefix); + + /// + /// Sets the value of the attribute with this name (and prefix). + /// + /// The name of the attribute + /// The value of the attribute + /// The prefix of the attribute; this is optional. + void set_attribute(const std::string& name, const std::string& value, const std::string& prefix); + + /// + /// Sets the text of the first text node. If there isn't a text node, one is added and set. + /// + /// The text to be set on the child node. + void set_child_text(const std::string& text); + + /// + /// Frees the wrappers set in node->_private + /// + /// The node to be freed. + static void free_wrappers(xmlNode* node); + + private: + xmlNode* m_ele; + }; + + /// + /// A class to wrap xmlDoc of the C library libxml2. This class provides the ability to create XML-formatted text from nodes. + /// + class xml_document_wrapper { + public: + xml_document_wrapper(); + + ~xml_document_wrapper(); + + /// + /// Converts the document to a string. + /// + /// A std::string that contains the result + std::string write_to_string(); + + /// + /// Creates the root node of the document. + /// + /// The name of the root node. 
+ /// The namespace of the root node. + /// The namespace prefix of the root node. + /// A wrapper that contains the root node. + xml_element_wrapper* create_root_node(const std::string& name, const std::string& namespace_name, const std::string& prefix); + + /// + /// Gets the root node of the document. + /// + /// The root node of the document. + xml_element_wrapper* get_root_node() const; + + private: + xmlDocPtr m_doc; + }; +}}}} // namespace azure::storage::core::xml +#endif //#ifndef _WIN32 + +#endif //#ifndef _XML_WRAPPER_H diff --git a/Microsoft.WindowsAzure.Storage/includes/wascore/xmlhelpers.h b/Microsoft.WindowsAzure.Storage/includes/wascore/xmlhelpers.h index 0f2f64e4..74d98aeb 100644 --- a/Microsoft.WindowsAzure.Storage/includes/wascore/xmlhelpers.h +++ b/Microsoft.WindowsAzure.Storage/includes/wascore/xmlhelpers.h @@ -47,8 +47,7 @@ #include #include #else -#include -#include +#include "wascore/xml_wrapper.h" #include #endif @@ -65,13 +64,34 @@ class xml_reader { public: + /// + /// An enumeration describing the result of the parse() operation. + /// + enum parse_result + { + /// + /// Parsing is finished and cannot be continued. + /// + cannot_continue = 0, + + /// + /// Parsing is paused and can be continued. + /// + can_continue = 1, + + /// + /// Exited because the XML is not complete. + /// + xml_not_complete = 2, + }; + virtual ~xml_reader() {} /// - /// Parse the given xml string/stream. Returns true if it finished parsing the stream to the end, and false - /// if it was asked to exit early via pause() + /// Parse the given xml string/stream. The return value indicates whether + /// parsing was successful.
/// - bool parse(); + parse_result parse(); protected: @@ -168,7 +188,7 @@ class xml_reader #ifdef _WIN32 CComPtr m_reader; #else - std::shared_ptr m_reader; + std::shared_ptr m_reader; std::string m_data; #endif @@ -187,7 +207,12 @@ class xml_writer virtual ~xml_writer() {} protected: + +#ifdef _WIN32 xml_writer() +#else // LINUX + xml_writer() :m_stream(nullptr) +#endif { } @@ -270,8 +295,8 @@ class xml_writer #ifdef _WIN32 CComPtr m_writer; #else // LINUX - std::shared_ptr m_document; - std::stack m_elementStack; + std::shared_ptr m_document; + std::stack m_elementStack; std::ostream * m_stream; #endif }; diff --git a/Microsoft.WindowsAzure.Storage/packages.config b/Microsoft.WindowsAzure.Storage/packages.config deleted file mode 100644 index f08160b6..00000000 --- a/Microsoft.WindowsAzure.Storage/packages.config +++ /dev/null @@ -1,5 +0,0 @@ - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Application.cpp b/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted.cpp similarity index 75% rename from Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Application.cpp rename to Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted.cpp index 23b26c3b..16b02487 100644 --- a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Application.cpp +++ b/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted.cpp @@ -1,5 +1,5 @@ // ----------------------------------------------------------------------------------------- -// +// // Copyright 2013 Microsoft Corporation // // Licensed under the Apache License, Version 2.0 (the "License"); @@ -15,21 +15,17 @@ // // ----------------------------------------------------------------------------------------- -#include "stdafx.h" #include "samples_common.h" -#include "was/storage_account.h" -#include "was/blob.h" -#include "cpprest/filestream.h" -#include "cpprest/containerstream.h" +#include +#include +#include +#include -namespace azure { 
namespace storage { namespace samples { - utility::string_t to_string(const std::vector& data) - { - return utility::string_t(data.cbegin(), data.cend()); - } +namespace azure { namespace storage { namespace samples { + SAMPLE(BlobsGettingStarted, blobs_getting_started_sample) void blobs_getting_started_sample() { try @@ -86,7 +82,7 @@ namespace azure { namespace storage { namespace samples { concurrency::streams::ostream output_stream(buffer); azure::storage::cloud_block_blob binary_blob = container.get_block_blob_reference(_XPLATSTR("my-blob-1")); binary_blob.download_to_stream(output_stream); - ucout << _XPLATSTR("Stream: ") << to_string(buffer.collection()) << std::endl; + ucout << _XPLATSTR("Stream: ") << utility::string_t(buffer.collection().begin(), buffer.collection().end()) << std::endl; // Download a blob as text azure::storage::cloud_block_blob text_blob = container.get_block_blob_reference(_XPLATSTR("my-blob-2")); @@ -114,6 +110,34 @@ namespace azure { namespace storage { namespace samples { // Download append blob as text utility::string_t append_text = append_blob.download_text(); ucout << _XPLATSTR("Append Text: ") << append_text << std::endl; + + // Cancellation token + pplx::cancellation_token_source source; // This is used to cancel the request. 
+ auto download_text_task = append_blob.download_text_async(azure::storage::access_condition(), azure::storage::blob_request_options(), azure::storage::operation_context(), source.get_token()); + source.cancel();// This call will cancel download_text_task + try + { + auto downloaded_text = download_text_task.get(); + ucout << _XPLATSTR("Text downloaded successfully unexpectedly, text is: ") << downloaded_text << std::endl; + } + catch (const azure::storage::storage_exception& e) + { + ucout << _XPLATSTR("Operation should be cancelled, the error message is: ") << e.what() << std::endl; + } + + // Millisecond level timeout + azure::storage::blob_request_options options; + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + try + { + download_text_task = append_blob.download_text_async(azure::storage::access_condition(), options, azure::storage::operation_context()); + auto downloaded_text = download_text_task.get(); + ucout << _XPLATSTR("Text downloaded successfully unexpectedly, text is: ") << downloaded_text << std::endl; + } + catch (const azure::storage::storage_exception& e) + { + ucout << _XPLATSTR("Operation should be timed-out, the error message is: ") << e.what() << std::endl; + } // Delete the blob append_blob.delete_blob(); @@ -139,11 +163,4 @@ namespace azure { namespace storage { namespace samples { } } -}}} // namespace azure::storage::samples - -int main(int argc, const char *argv[]) -{ - azure::storage::samples::blobs_getting_started_sample(); - return 0; -} - +}}} // namespace azure::storage::samples diff --git a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/CMakeLists.txt b/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/CMakeLists.txt deleted file mode 100644 index af472dde..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/CMakeLists.txt +++ /dev/null @@ -1,14 +0,0 @@ -include_directories(. 
${AZURESTORAGESAMPLES_INCLUDE_DIRS}) - -# THE ORDER OF FILES IS VERY /VERY/ IMPORTANT -if(UNIX) - set(SOURCES - Application.cpp - stdafx.cpp - ) -endif() - -buildsample(${AZURESTORAGESAMPLES_BLOBS} ${SOURCES}) - -# Copy DataFile.txt to output directory -file(COPY DataFile.txt DESTINATION ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}) diff --git a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/DataFile.txt b/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/DataFile.txt deleted file mode 100644 index 8fe2a4b5..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/DataFile.txt +++ /dev/null @@ -1 +0,0 @@ -The quick brown fox jumps over the lazy dog. \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v120.vcxproj b/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v120.vcxproj deleted file mode 100644 index a9e392db..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v120.vcxproj +++ /dev/null @@ -1,127 +0,0 @@ - - - - - Debug - Win32 - - - Release - Win32 - - - - {234BE88B-26A3-46DB-A199-C9AD595E8076} - MicrosoftWindowsAzureStorageBlobsGettingStarted - - - - Application - true - v120 - Unicode - - - Application - false - v120 - true - Unicode - - - - - - - - - - - - 3d4c57a7 - - - true - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - false - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - - - - - - Level3 - Use - MaxSpeed - true - true - 
WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - true - true - - - - - - - - - Create - Create - - - - - {dcff75b0-b142-4ec8-992f-3e48f2e3eece} - true - true - false - true - false - - - {6412bfc8-d0f2-4a87-8c36-4efd77157859} - - - - - true - - - - - - - - - - - - This project references NuGet package(s) that are missing on this computer. Enable NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. - - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v120.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v120.vcxproj.filters deleted file mode 100644 index 897e83fc..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v120.vcxproj.filters +++ /dev/null @@ -1,36 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - Header Files - - - - - Source Files - - - Source Files - - - - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v140.vcxproj b/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v140.vcxproj deleted file mode 100644 index bd7c8306..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v140.vcxproj +++ /dev/null @@ -1,125 +0,0 @@ - - - - - Debug - Win32 - - - Release 
- Win32 - - - - {F2E0AF2E-8517-4E2B-85AA-701F7EBA6130} - MicrosoftWindowsAzureStorageBlobsGettingStarted - - - - Application - true - v140 - Unicode - - - Application - false - v140 - true - Unicode - - - - - - - - - - - - - true - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - false - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - - - - - - Level3 - Use - MaxSpeed - true - true - WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - true - true - - - - - - - - - Create - Create - - - - - {25D342C3-6CDA-44DD-A16A-32A19B692785} - true - true - false - true - false - - - {2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C} - - - - - true - - - - - - - - - - - - This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. 
- - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v140.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v140.vcxproj.filters deleted file mode 100644 index 897e83fc..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v140.vcxproj.filters +++ /dev/null @@ -1,36 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - Header Files - - - - - Source Files - - - Source Files - - - - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/packages.config b/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/packages.config deleted file mode 100644 index ebd30628..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/packages.config +++ /dev/null @@ -1,5 +0,0 @@ - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/stdafx.cpp b/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/stdafx.cpp deleted file mode 100644 index 4c691102..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/stdafx.cpp +++ /dev/null @@ -1,23 +0,0 @@ -// ----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. -// -// ----------------------------------------------------------------------------------------- - -// stdafx.cpp : source file that includes just the standard includes -// ConsoleApplication1.pch will be the pre-compiled header -// stdafx.obj will contain the pre-compiled type information - -#include "stdafx.h" - diff --git a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/stdafx.h b/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/stdafx.h deleted file mode 100644 index e52a6520..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted/stdafx.h +++ /dev/null @@ -1,28 +0,0 @@ -// ----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
-// -// ----------------------------------------------------------------------------------------- - -// stdafx.h : include file for standard system include files, -// or project specific include files that are used frequently, but -// are changed infrequently -// - -#pragma once - -#include "targetver.h" - -#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers - diff --git a/Microsoft.WindowsAzure.Storage/samples/BlobsPerformanceBenchmark.cpp b/Microsoft.WindowsAzure.Storage/samples/BlobsPerformanceBenchmark.cpp new file mode 100644 index 00000000..9665868c --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/samples/BlobsPerformanceBenchmark.cpp @@ -0,0 +1,109 @@ +// ----------------------------------------------------------------------------------------- +// +// Copyright 2013 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+//
+// -----------------------------------------------------------------------------------------
+
+#include "samples_common.h"
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+namespace azure { namespace storage { namespace samples {
+
+    SAMPLE(BlobsPerformanceBenchmark, blobs_performance_benchmark)
+    void blobs_performance_benchmark()
+    {
+        try
+        {
+            azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string);
+
+            azure::storage::cloud_blob_client blob_client = storage_account.create_cloud_blob_client();
+            azure::storage::cloud_blob_container container = blob_client.get_container_reference(_XPLATSTR("benchmark-container"));
+
+            container.create_if_not_exists();
+
+            const uint64_t blob_size = 1024 * 1024 * 1024;
+            const int parallelism = 20;
+
+            std::vector<uint8_t> buffer;
+            buffer.resize(blob_size);
+
+            std::mt19937_64 rand_engine(std::random_device{}());
+            std::uniform_int_distribution<uint64_t> dist;
+            std::generate(reinterpret_cast<uint64_t*>(buffer.data()), reinterpret_cast<uint64_t*>(buffer.data() + buffer.size()), [&dist, &rand_engine]() { return dist(rand_engine); });
+
+            azure::storage::blob_request_options options;
+            options.set_parallelism_factor(8);
+            options.set_use_transactional_crc64(false);
+            options.set_use_transactional_md5(false);
+            options.set_store_blob_content_md5(false);
+            options.set_stream_write_size_in_bytes(32 * 1024 * 1024);
+
+            std::vector<pplx::task<void>> tasks;
+
+            auto start = std::chrono::steady_clock::now();
+            for (int i = 0; i < parallelism; ++i)
+            {
+                auto blob = container.get_block_blob_reference(_XPLATSTR("blob") + utility::conversions::to_string_t(std::to_string(i)));
+                auto task = blob.upload_from_stream_async(concurrency::streams::container_buffer<std::vector<uint8_t>>(buffer).create_istream(), blob_size, azure::storage::access_condition(), options, azure::storage::operation_context());
+                tasks.emplace_back(std::move(task));
+            }
+            for (auto& t : tasks)
+            {
+                t.get();
+            }
+            auto end = std::chrono::steady_clock::now();
+
+            double elapsed_s = double(std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()) / 1000;
+            uint64_t data_mb = blob_size * parallelism / 1024 / 1024;
+
+            std::cout << "Uploaded " << data_mb << "MB in " << elapsed_s << " seconds, throughput " << data_mb / elapsed_s << "MBps" << std::endl;
+
+            tasks.clear();
+            start = std::chrono::steady_clock::now();
+            {
+                std::vector<concurrency::streams::container_buffer<std::vector<uint8_t>>> download_buffers;
+                for (int i = 0; i < parallelism; ++i)
+                {
+                    auto blob = container.get_block_blob_reference(_XPLATSTR("blob") + utility::conversions::to_string_t(std::to_string(i)));
+                    download_buffers.emplace_back(concurrency::streams::container_buffer<std::vector<uint8_t>>());
+                    auto task = blob.download_to_stream_async(download_buffers.back().create_ostream(), azure::storage::access_condition(), options, azure::storage::operation_context());
+                    tasks.emplace_back(std::move(task));
+                }
+                for (auto& t : tasks)
+                {
+                    t.get();
+                }
+                end = std::chrono::steady_clock::now();
+            }
+
+            elapsed_s = double(std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()) / 1000;
+
+            std::cout << "Downloaded " << data_mb << "MB in " << elapsed_s << " seconds, throughput " << data_mb / elapsed_s << "MBps" << std::endl;
+        }
+        catch (const azure::storage::storage_exception& e)
+        {
+            std::cout << e.what() << std::endl;
+        }
+    }
+
+}}} // namespace azure::storage::samples
diff --git a/Microsoft.WindowsAzure.Storage/samples/CMakeLists.txt b/Microsoft.WindowsAzure.Storage/samples/CMakeLists.txt
index 45a7f278..c5154556 100644
--- a/Microsoft.WindowsAzure.Storage/samples/CMakeLists.txt
+++ b/Microsoft.WindowsAzure.Storage/samples/CMakeLists.txt
@@ -1,35 +1,21 @@
-# Reconfigure final output directory
-set(AZURESTORAGESAMPLES_INCLUDE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/SamplesCommon)
-set(AZURESTORAGESAMPLES_INCLUDE_DIRS ${CMAKE_CURRENT_SOURCE_DIR}/SamplesCommon ${AZURESTORAGE_INCLUDE_DIRS})
+if(UNIX)
+  set(SOURCES
+    BlobsGettingStarted.cpp
+    BlobsPerformanceBenchmark.cpp
+    FilesGettingStarted.cpp
+    JsonPayloadFormat.cpp
+    main.cpp
+    NativeClientLibraryDemo1.cpp
+    NativeClientLibraryDemo2.cpp
+    OAuthGettingStarted.cpp
+    QueuesGettingStarted.cpp
+    samples_common.h
+    TablesGettingStarted.cpp
+    FilesProperties.cpp
+  )
+endif()

-set(AZURESTORAGESAMPLES_COMMON samplescommon)
-set(AZURESTORAGESAMPLES_BLOBS samplesblobs)
-set(AZURESTORAGESAMPLES_JSON samplesjson)
-set(AZURESTORAGESAMPLES_TABLES samplestables)
-set(AZURESTORAGESAMPLES_QUEUES samplesqueues)
-set(AZURESTORAGESAMPLES_FILES samplesfiles)
-set(AZURESTORAGESAMPLES_LIBRARIES ${AZURESTORAGESAMPLES_COMMON} ${AZURESTORAGE_LIBRARIES})
+add_executable(${AZUREAZURAGE_LIBRARY_SAMPLE} ${SOURCES})

-macro (buildsample SAMPLE_NAME SAMPLE_SOURCES)
-
-    if("${SAMPLE_NAME}" MATCHES "${AZURESTORAGESAMPLES_COMMON}")
-        add_library(${SAMPLE_NAME} ${SAMPLE_SOURCES})
-        target_link_libraries(${AZURESTORAGESAMPLES_LIBRARIES})
-    else()
-        add_executable(${SAMPLE_NAME} ${SAMPLE_SOURCES})
-        target_link_libraries(${SAMPLE_NAME} ${AZURESTORAGESAMPLES_LIBRARIES})
-    endif()
-
-    # Portions specific to cpprest binary versioning.
- if(UNIX) - set_target_properties(${SAMPLE_NAME} PROPERTIES - SOVERSION ${AZURESTORAGE_VERSION_MAJOR}.${AZURESTORAGE_VERSION_MINOR}) - endif() -endmacro() - -add_subdirectory(SamplesCommon) -add_subdirectory(BlobsGettingStarted) -add_subdirectory(JsonPayloadFormat) -add_subdirectory(QueuesGettingStarted) -add_subdirectory(TablesGettingStarted) -add_subdirectory(FilesGettingStarted) +target_include_directories(${AZUREAZURAGE_LIBRARY_SAMPLE} PRIVATE ${AZURESTORAGE_INCLUDE_DIRS}) +target_link_libraries(${AZUREAZURAGE_LIBRARY_SAMPLE} ${AZURESTORAGE_LIBRARIES}) diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/NativeClientLibraryDemo1.cpp b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/NativeClientLibraryDemo1.cpp deleted file mode 100644 index f59cc33a..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/NativeClientLibraryDemo1.cpp +++ /dev/null @@ -1,71 +0,0 @@ -// NativeClientLibraryDemo1.cpp : Defines the entry point for the console application. 
-// - -#include "stdafx.h" -#include "was/storage_account.h" -#include "was/table.h" -#include "was/queue.h" - -int _tmain(int argc, _TCHAR* argv[]) -{ - try - { - azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(U("DefaultEndpointsProtocol=https;AccountName=ACCOUNT_NAME;AccountKey=ACCOUNT_KEY")); - - azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client(); - azure::storage::cloud_table table = table_client.get_table_reference(U("blogposts")); - table.create_if_not_exists(); - - azure::storage::cloud_queue_client queue_client = storage_account.create_cloud_queue_client(); - azure::storage::cloud_queue queue = queue_client.get_queue_reference(U("blog-processing")); - queue.create_if_not_exists(); - - while (true) - { - utility::string_t name; - ucin >> name; - - - // Table - - azure::storage::table_batch_operation batch_operation; - for (int i = 0; i < 3; ++i) - { - utility::string_t partition_key = U("partition"); - utility::string_t row_key = name + utility::conversions::print_string(i); - - azure::storage::table_entity entity(partition_key, row_key); - entity.properties()[U("PostId")] = azure::storage::entity_property(rand()); - entity.properties()[U("Content")] = azure::storage::entity_property(utility::string_t(U("some text"))); - entity.properties()[U("Date")] = azure::storage::entity_property(utility::datetime::utc_now()); - batch_operation.insert_entity(entity); - } - - pplx::task table_task = table.execute_batch_async(batch_operation).then([](std::vector results) - { - for (auto it = results.cbegin(); it != results.cend(); ++it) - { - ucout << U("Status: ") << it->http_status_code() << std::endl; - } - }); - - - // Queue - - azure::storage::cloud_queue_message queue_message(name); - std::chrono::seconds time_to_live(100000); - std::chrono::seconds initial_visibility_timeout(rand() % 30); - - pplx::task queue_task = queue.add_message_async(queue_message, time_to_live, 
initial_visibility_timeout, azure::storage::queue_request_options(), azure::storage::operation_context()); - - - queue_task.wait(); - table_task.wait(); - } - } - catch (const azure::storage::storage_exception& e) - { - ucout << e.what() << std::endl; - } -} - diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/NativeClientLibraryDemo1.sln b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/NativeClientLibraryDemo1.sln deleted file mode 100644 index ebe16bb1..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/NativeClientLibraryDemo1.sln +++ /dev/null @@ -1,22 +0,0 @@ - -Microsoft Visual Studio Solution File, Format Version 12.00 -# Visual Studio 2013 -VisualStudioVersion = 12.0.21005.1 -MinimumVisualStudioVersion = 10.0.40219.1 -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "NativeClientLibraryDemo1", "NativeClientLibraryDemo1.vcxproj", "{3BA737F4-2630-4DA2-8061-3330D984DF43}" -EndProject -Global - GlobalSection(SolutionConfigurationPlatforms) = preSolution - Debug|Win32 = Debug|Win32 - Release|Win32 = Release|Win32 - EndGlobalSection - GlobalSection(ProjectConfigurationPlatforms) = postSolution - {3BA737F4-2630-4DA2-8061-3330D984DF43}.Debug|Win32.ActiveCfg = Debug|Win32 - {3BA737F4-2630-4DA2-8061-3330D984DF43}.Debug|Win32.Build.0 = Debug|Win32 - {3BA737F4-2630-4DA2-8061-3330D984DF43}.Release|Win32.ActiveCfg = Release|Win32 - {3BA737F4-2630-4DA2-8061-3330D984DF43}.Release|Win32.Build.0 = Release|Win32 - EndGlobalSection - GlobalSection(SolutionProperties) = preSolution - HideSolutionNode = FALSE - EndGlobalSection -EndGlobal diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/NativeClientLibraryDemo1.vcxproj b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/NativeClientLibraryDemo1.vcxproj deleted file mode 100644 index d865b720..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/NativeClientLibraryDemo1.vcxproj +++ 
/dev/null @@ -1,117 +0,0 @@ - - - - - - Debug - Win32 - - - Release - Win32 - - - - {3BA737F4-2630-4DA2-8061-3330D984DF43} - Win32Proj - NativeClientLibraryDemo1 - - - - Application - true - v120 - Unicode - - - Application - false - v120 - true - Unicode - - - - - - - - - - - - 94e665ca - - - true - - - false - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_CONSOLE;_LIB;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - - - Console - true - - - - - Level3 - Use - MaxSpeed - true - true - WIN32;NDEBUG;_CONSOLE;_LIB;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - - - Console - true - true - true - - - - - - - - - - - - - Create - Create - - - - - - - - - - - - - - - - - - This project references NuGet package(s) that are missing on this computer. Enable NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. - - - - - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/NativeClientLibraryDemo1.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/NativeClientLibraryDemo1.vcxproj.filters deleted file mode 100644 index 5afb1c01..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/NativeClientLibraryDemo1.vcxproj.filters +++ /dev/null @@ -1,39 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hh;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - - - - Header Files - - - Header Files - - - - - Source Files - - - Source Files - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/ReadMe.txt b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/ReadMe.txt deleted file mode 100644 index 
5519d74b..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/ReadMe.txt +++ /dev/null @@ -1,40 +0,0 @@ -======================================================================== - CONSOLE APPLICATION : NativeClientLibraryDemo1 Project Overview -======================================================================== - -AppWizard has created this NativeClientLibraryDemo1 application for you. - -This file contains a summary of what you will find in each of the files that -make up your NativeClientLibraryDemo1 application. - - -NativeClientLibraryDemo1.vcxproj - This is the main project file for VC++ projects generated using an Application Wizard. - It contains information about the version of Visual C++ that generated the file, and - information about the platforms, configurations, and project features selected with the - Application Wizard. - -NativeClientLibraryDemo1.vcxproj.filters - This is the filters file for VC++ projects generated using an Application Wizard. - It contains information about the association between the files in your project - and the filters. This association is used in the IDE to show grouping of files with - similar extensions under a specific node (for e.g. ".cpp" files are associated with the - "Source Files" filter). - -NativeClientLibraryDemo1.cpp - This is the main application source file. - -///////////////////////////////////////////////////////////////////////////// -Other standard files: - -StdAfx.h, StdAfx.cpp - These files are used to build a precompiled header (PCH) file - named NativeClientLibraryDemo1.pch and a precompiled types file named StdAfx.obj. - -///////////////////////////////////////////////////////////////////////////// -Other notes: - -AppWizard uses "TODO:" comments to indicate parts of the source code you -should add to or customize. 
- -///////////////////////////////////////////////////////////////////////////// diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/packages.config b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/packages.config deleted file mode 100644 index e3165151..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/packages.config +++ /dev/null @@ -1,6 +0,0 @@ - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/stdafx.cpp b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/stdafx.cpp deleted file mode 100644 index 555a4ed6..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/stdafx.cpp +++ /dev/null @@ -1,8 +0,0 @@ -// stdafx.cpp : source file that includes just the standard includes -// NativeClientLibraryDemo1.pch will be the pre-compiled header -// stdafx.obj will contain the pre-compiled type information - -#include "stdafx.h" - -// TODO: reference any additional headers you need in STDAFX.H -// and not in this file diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/stdafx.h b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/stdafx.h deleted file mode 100644 index b005a839..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/stdafx.h +++ /dev/null @@ -1,15 +0,0 @@ -// stdafx.h : include file for standard system include files, -// or project specific include files that are used frequently, but -// are changed infrequently -// - -#pragma once - -#include "targetver.h" - -#include -#include - - - -// TODO: reference additional headers your program requires here diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/targetver.h b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/targetver.h deleted file mode 100644 index 87c0086d..00000000 --- 
a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo1/targetver.h +++ /dev/null @@ -1,8 +0,0 @@ -#pragma once - -// Including SDKDDKVer.h defines the highest available Windows platform. - -// If you wish to build your application for a previous Windows platform, include WinSDKVer.h and -// set the _WIN32_WINNT macro to the platform you wish to support before including SDKDDKVer.h. - -#include diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/NativeClientLibraryDemo2.cpp b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/NativeClientLibraryDemo2.cpp deleted file mode 100644 index a89f4ca0..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/NativeClientLibraryDemo2.cpp +++ /dev/null @@ -1,65 +0,0 @@ -// NativeClientLibraryDemo2.cpp : Defines the entry point for the console application. -// - -#include "stdafx.h" -#include "was/storage_account.h" -#include "was/table.h" -#include "was/queue.h" - -int _tmain(int argc, _TCHAR* argv[]) -{ - try - { - azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(U("DefaultEndpointsProtocol=https;AccountName=ACCOUNT_NAME;AccountKey=ACCOUNT_KEY")); - - azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client(); - azure::storage::cloud_table table = table_client.get_table_reference(U("blogposts")); - table.create_if_not_exists(); - - azure::storage::cloud_queue_client queue_client = storage_account.create_cloud_queue_client(); - azure::storage::cloud_queue queue = queue_client.get_queue_reference(U("blog-processing")); - queue.create_if_not_exists(); - - while (true) - { - azure::storage::cloud_queue_message message = queue.get_message(); - if (!message.id().empty()) - { - utility::string_t partition_key(U("partition")); - utility::string_t start_row_key = message.content_as_string(); - utility::string_t end_row_key = message.content_as_string() + U(":"); - - 
azure::storage::table_query query; - query.set_filter_string(azure::storage::table_query::combine_filter_conditions( - azure::storage::table_query::combine_filter_conditions( - azure::storage::table_query::generate_filter_condition(U("PartitionKey"), azure::storage::query_comparison_operator::equal, partition_key), - azure::storage::query_logical_operator::op_and, - azure::storage::table_query::generate_filter_condition(U("RowKey"), azure::storage::query_comparison_operator::greater_than, start_row_key)), - azure::storage::query_logical_operator::op_and, - azure::storage::table_query::generate_filter_condition(U("RowKey"), azure::storage::query_comparison_operator::less_than, end_row_key)) - ); - - azure::storage::table_query_iterator it = table.execute_query(query); - azure::storage::table_query_iterator end_of_results; - for (; it != end_of_results; ++it) - { - ucout << U("Entity: ") << it->row_key() << U(" "); - ucout << it->properties().at(U("PostId")).int32_value() << U(" "); - ucout << it->properties().at(U("Content")).string_value() << std::endl; - } - - queue.delete_message(message); - } - - Sleep(1000); - } - - table.delete_table_if_exists(); - queue.delete_queue_if_exists(); - } - catch (const azure::storage::storage_exception& e) - { - ucout << e.what() << std::endl; - } -} - diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/NativeClientLibraryDemo2.sln b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/NativeClientLibraryDemo2.sln deleted file mode 100644 index 495fb5ce..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/NativeClientLibraryDemo2.sln +++ /dev/null @@ -1,22 +0,0 @@ - -Microsoft Visual Studio Solution File, Format Version 12.00 -# Visual Studio 2013 -VisualStudioVersion = 12.0.21005.1 -MinimumVisualStudioVersion = 10.0.40219.1 -Project("{A210F7E6-0DE9-4D39-8B86-5F79C1D8CA7C}") = "NativeClientLibraryDemo2", "NativeClientLibraryDemo2.vcxproj", 
"{9999D237-9C22-4B26-8EE8-66686B47A707}" -EndProject -Global - GlobalSection(SolutionConfigurationPlatforms) = preSolution - Debug|Win32 = Debug|Win32 - Release|Win32 = Release|Win32 - EndGlobalSection - GlobalSection(ProjectConfigurationPlatforms) = postSolution - {9999D237-9C22-4B26-8EE8-66686B47A707}.Debug|Win32.ActiveCfg = Debug|Win32 - {9999D237-9C22-4B26-8EE8-66686B47A707}.Debug|Win32.Build.0 = Debug|Win32 - {9999D237-9C22-4B26-8EE8-66686B47A707}.Release|Win32.ActiveCfg = Release|Win32 - {9999D237-9C22-4B26-8EE8-66686B47A707}.Release|Win32.Build.0 = Release|Win32 - EndGlobalSection - GlobalSection(SolutionProperties) = preSolution - HideSolutionNode = FALSE - EndGlobalSection -EndGlobal diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/NativeClientLibraryDemo2.vcxproj b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/NativeClientLibraryDemo2.vcxproj deleted file mode 100644 index addef342..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/NativeClientLibraryDemo2.vcxproj +++ /dev/null @@ -1,117 +0,0 @@ - - - - - - Debug - Win32 - - - Release - Win32 - - - - {9999D237-9C22-4B26-8EE8-66686B47A707} - Win32Proj - NativeClientLibraryDemo2 - - - - Application - true - v120 - Unicode - - - Application - false - v120 - true - Unicode - - - - - - - - - - - - dfadcfd8 - - - true - - - false - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_CONSOLE;_LIB;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - - - Console - true - - - - - Level3 - Use - MaxSpeed - true - true - WIN32;NDEBUG;_CONSOLE;_LIB;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - - - Console - true - true - true - - - - - - - - - - - - - Create - Create - - - - - - - - - - - - - - - - - - This project references NuGet package(s) that are missing on this computer. Enable NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. 
- - - - - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/NativeClientLibraryDemo2.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/NativeClientLibraryDemo2.vcxproj.filters deleted file mode 100644 index 41b91fd4..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/NativeClientLibraryDemo2.vcxproj.filters +++ /dev/null @@ -1,39 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hh;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - - - - Header Files - - - Header Files - - - - - Source Files - - - Source Files - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/ReadMe.txt b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/ReadMe.txt deleted file mode 100644 index 3b1f9da8..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/ReadMe.txt +++ /dev/null @@ -1,40 +0,0 @@ -======================================================================== - CONSOLE APPLICATION : NativeClientLibraryDemo2 Project Overview -======================================================================== - -AppWizard has created this NativeClientLibraryDemo2 application for you. - -This file contains a summary of what you will find in each of the files that -make up your NativeClientLibraryDemo2 application. - - -NativeClientLibraryDemo2.vcxproj - This is the main project file for VC++ projects generated using an Application Wizard. - It contains information about the version of Visual C++ that generated the file, and - information about the platforms, configurations, and project features selected with the - Application Wizard. 
- -NativeClientLibraryDemo2.vcxproj.filters - This is the filters file for VC++ projects generated using an Application Wizard. - It contains information about the association between the files in your project - and the filters. This association is used in the IDE to show grouping of files with - similar extensions under a specific node (for e.g. ".cpp" files are associated with the - "Source Files" filter). - -NativeClientLibraryDemo2.cpp - This is the main application source file. - -///////////////////////////////////////////////////////////////////////////// -Other standard files: - -StdAfx.h, StdAfx.cpp - These files are used to build a precompiled header (PCH) file - named NativeClientLibraryDemo2.pch and a precompiled types file named StdAfx.obj. - -///////////////////////////////////////////////////////////////////////////// -Other notes: - -AppWizard uses "TODO:" comments to indicate parts of the source code you -should add to or customize. - -///////////////////////////////////////////////////////////////////////////// diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/packages.config b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/packages.config deleted file mode 100644 index e3165151..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/packages.config +++ /dev/null @@ -1,6 +0,0 @@ - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/stdafx.cpp b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/stdafx.cpp deleted file mode 100644 index eb208ce5..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/stdafx.cpp +++ /dev/null @@ -1,8 +0,0 @@ -// stdafx.cpp : source file that includes just the standard includes -// NativeClientLibraryDemo2.pch will be the pre-compiled header -// stdafx.obj will contain the pre-compiled type information - -#include "stdafx.h" - -// TODO: reference any 
additional headers you need in STDAFX.H -// and not in this file diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/stdafx.h b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/stdafx.h deleted file mode 100644 index b005a839..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/stdafx.h +++ /dev/null @@ -1,15 +0,0 @@ -// stdafx.h : include file for standard system include files, -// or project specific include files that are used frequently, but -// are changed infrequently -// - -#pragma once - -#include "targetver.h" - -#include -#include - - - -// TODO: reference additional headers your program requires here diff --git a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/targetver.h b/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/targetver.h deleted file mode 100644 index 87c0086d..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Channel9GoingNativeDemo2/targetver.h +++ /dev/null @@ -1,8 +0,0 @@ -#pragma once - -// Including SDKDDKVer.h defines the highest available Windows platform. - -// If you wish to build your application for a previous Windows platform, include WinSDKVer.h and -// set the _WIN32_WINNT macro to the platform you wish to support before including SDKDDKVer.h. 
- -#include <SDKDDKVer.h> diff --git a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Application.cpp b/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted.cpp similarity index 93% rename from Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Application.cpp rename to Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted.cpp index 9b282531..7358516f 100644 --- a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Application.cpp +++ b/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted.cpp @@ -1,5 +1,5 @@ // ----------------------------------------------------------------------------------------- -// +// // Copyright 2013 Microsoft Corporation // // Licensed under the Apache License, Version 2.0 (the "License"); @@ -15,15 +15,16 @@ // // ----------------------------------------------------------------------------------------- -#include "stdafx.h" #include "samples_common.h" -#include "was/storage_account.h" -#include "was/file.h" -#include "cpprest/filestream.h" +#include <was/storage_account.h> +#include <was/file.h> +#include <cpprest/filestream.h> + namespace azure { namespace storage { namespace samples { + SAMPLE(FilesGettingStarted, files_getting_started_sample) void files_getting_started_sample() { try @@ -115,11 +116,4 @@ namespace azure { namespace storage { namespace samples { } } -}}} // namespace azure::storage::samples - -int main(int argc, const char *argv[]) -{ - azure::storage::samples::files_getting_started_sample(); - return 0; -} - +}}} // namespace azure::storage::samples \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/CMakeLists.txt b/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/CMakeLists.txt deleted file mode 100644 index 35e50218..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/CMakeLists.txt +++ /dev/null @@ -1,14 +0,0 @@ -include_directories(.
${AZURESTORAGESAMPLES_INCLUDE_DIRS}) - -# THE ORDER OF FILES IS VERY /VERY/ IMPORTANT -if(UNIX) - set(SOURCES - Application.cpp - stdafx.cpp - ) -endif() - -buildsample(${AZURESTORAGESAMPLES_FILES} ${SOURCES}) - -# Copy test configuration to output directory -file(COPY DataFile.txt DESTINATION ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}) diff --git a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/DataFile.txt b/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/DataFile.txt deleted file mode 100644 index 8fe2a4b5..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/DataFile.txt +++ /dev/null @@ -1 +0,0 @@ -The quick brown fox jumps over the lazy dog. \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Microsoft.WindowsAzure.Storage.FilesGettingStarted.v120.vcxproj b/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Microsoft.WindowsAzure.Storage.FilesGettingStarted.v120.vcxproj deleted file mode 100644 index 6cb7ea1a..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Microsoft.WindowsAzure.Storage.FilesGettingStarted.v120.vcxproj +++ /dev/null @@ -1,122 +0,0 @@ - - - - - Debug - Win32 - - - Release - Win32 - - - - {CF7A5DAB-D260-4E82-971D-C69208F7C394} - MicrosoftWindowsAzureStorageFilesGettingStarted - - - - Application - true - v120 - Unicode - - - Application - false - v120 - true - Unicode - - - - - - - - - - - - 56d78138 - - - true - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - false - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - - - - - - Level3 - Use - MaxSpeed - true - true - 
WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - true - true - - - - - - - - - Create - Create - - - - - {dcff75b0-b142-4ec8-992f-3e48f2e3eece} - true - true - false - true - false - - - {6412bfc8-d0f2-4a87-8c36-4efd77157859} - - - - - - - - - - - - This project references NuGet package(s) that are missing on this computer. Enable NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. - - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Microsoft.WindowsAzure.Storage.FilesGettingStarted.v120.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Microsoft.WindowsAzure.Storage.FilesGettingStarted.v120.vcxproj.filters deleted file mode 100644 index 50b73c7f..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Microsoft.WindowsAzure.Storage.FilesGettingStarted.v120.vcxproj.filters +++ /dev/null @@ -1,33 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - Header Files - - - - - Source Files - - - Source Files - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Microsoft.WindowsAzure.Storage.FilesGettingStarted.v140.vcxproj b/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Microsoft.WindowsAzure.Storage.FilesGettingStarted.v140.vcxproj deleted file mode 100644 index aa458345..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Microsoft.WindowsAzure.Storage.FilesGettingStarted.v140.vcxproj +++ /dev/null @@ -1,120 +0,0 @@ - - - - - Debug - Win32 - - - Release - Win32 - - - - 
{69B7A340-02D1-4685-9319-4804558321A1} - MicrosoftWindowsAzureStorageFilesGettingStarted - - - - Application - true - v140 - Unicode - - - Application - false - v140 - true - Unicode - - - - - - - - - - - - - true - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - false - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - - - - - - Level3 - Use - MaxSpeed - true - true - WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - true - true - - - - - - - - - Create - Create - - - - - {25D342C3-6CDA-44DD-A16A-32A19B692785} - true - true - false - true - false - - - {2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C} - - - - - - - - - - - - This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. 
- - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Microsoft.WindowsAzure.Storage.FilesGettingStarted.v140.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Microsoft.WindowsAzure.Storage.FilesGettingStarted.v140.vcxproj.filters deleted file mode 100644 index 50b73c7f..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/Microsoft.WindowsAzure.Storage.FilesGettingStarted.v140.vcxproj.filters +++ /dev/null @@ -1,33 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - Header Files - - - - - Source Files - - - Source Files - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/packages.config b/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/packages.config deleted file mode 100644 index ebd30628..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/packages.config +++ /dev/null @@ -1,5 +0,0 @@ - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/stdafx.cpp b/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/stdafx.cpp deleted file mode 100644 index 4c691102..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/stdafx.cpp +++ /dev/null @@ -1,23 +0,0 @@ -// ----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. -// -// ----------------------------------------------------------------------------------------- - -// stdafx.cpp : source file that includes just the standard includes -// ConsoleApplication1.pch will be the pre-compiled header -// stdafx.obj will contain the pre-compiled type information - -#include "stdafx.h" - diff --git a/Microsoft.WindowsAzure.Storage/samples/FilesProperties.cpp b/Microsoft.WindowsAzure.Storage/samples/FilesProperties.cpp new file mode 100644 index 00000000..2cc35b5b --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/samples/FilesProperties.cpp @@ -0,0 +1,89 @@ +// ----------------------------------------------------------------------------------------- +// +// Copyright 2013 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+// +// ----------------------------------------------------------------------------------------- + +#include "samples_common.h" + +#include <was/storage_account.h> +#include <was/file.h> + + +namespace azure { namespace storage { namespace samples { + + SAMPLE(FilesProperties, files_properties_sample) + void files_properties_sample() + { + try + { + // Initialize storage account + azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string); + + // Create share + azure::storage::cloud_file_client file_client = storage_account.create_cloud_file_client(); + azure::storage::cloud_file_share share = file_client.get_share_reference(_XPLATSTR("my-sample-share")); + share.create_if_not_exists(); + + // The azure-storage-cpp SDK treats the permission as an opaque string. The string below is a typical default permission. + utility::string_t permission = _XPLATSTR("O:S-1-5-21-2127521184-1604012920-1887927527-21560751G:S-1-5-21-2127521184-1604012920-1887927527-513D:(A;;FA;;;SY)(A;;FA;;;BA)(A;;0x1200a9;;;S-1-5-21-397955417-626881126-188441444-3053964)"); + utility::string_t permission_key = share.upload_file_permission(permission); + + azure::storage::cloud_file_directory directory = share.get_directory_reference(_XPLATSTR("my-sample-directory")); + directory.delete_directory_if_exists(); + + // Create a new directory with properties. + directory.properties().set_attributes(azure::storage::cloud_file_attributes::directory | azure::storage::cloud_file_attributes::system); + directory.properties().set_creation_time(azure::storage::cloud_file_directory_properties::now); + directory.properties().set_last_write_time(utility::datetime::from_string(_XPLATSTR("Thu, 31 Oct 2019 06:42:18 GMT"))); + // You can specify either permission or permission key, but not both. + directory.properties().set_permission(permission); + //directory.properties().set_permission_key(permission_key); + directory.create(); + + // Upload a file.
+ azure::storage::cloud_file file = directory.get_file_reference(_XPLATSTR("my-sample-file-1")); + // File properties work the same way as directory properties. + // You can leave properties unset to use the defaults. + file.properties().set_attributes(azure::storage::cloud_file_attributes::archive); + //file.properties().set_permission(azure::storage::cloud_file_properties::inherit); + file.properties().set_permission_key(permission_key); + file.upload_text(_XPLATSTR("some text")); + + // Update properties for an existing file/directory. + file.properties().set_creation_time(utility::datetime::from_string(_XPLATSTR("Wed, 10 Oct 2001 20:51:31 +0000"))); + file.upload_properties(); + + file.delete_file(); + directory.delete_directory(); + share.delete_share_if_exists(); + } + catch (const azure::storage::storage_exception& e) + { + ucout << _XPLATSTR("Error: ") << e.what() << std::endl; + + azure::storage::request_result result = e.result(); + azure::storage::storage_extended_error extended_error = result.extended_error(); + if (!extended_error.message().empty()) + { + ucout << extended_error.message() << std::endl; + } + } + catch (const std::exception& e) + { + ucout << _XPLATSTR("Error: ") << e.what() << std::endl; + } + } + +}}} // namespace azure::storage::samples diff --git a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Application.cpp b/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat.cpp similarity index 94% rename from Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Application.cpp rename to Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat.cpp index afdecaf5..9201bb03 100644 --- a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Application.cpp +++ b/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat.cpp @@ -1,5 +1,5 @@ // ----------------------------------------------------------------------------------------- -// +// // Copyright 2013 Microsoft Corporation // // Licensed under the Apache License, Version
2.0 (the "License"); @@ -15,15 +15,16 @@ // // ----------------------------------------------------------------------------------------- -#include "stdafx.h" #include "samples_common.h" -#include "was/storage_account.h" -#include "was/table.h" +#include <was/storage_account.h> +#include <was/table.h> + namespace azure { namespace storage { namespace samples { - void tables_getting_started_sample() + SAMPLE(TablesJsonPayload, tables_json_payload_sample) + void tables_json_payload_sample() { try { @@ -41,7 +42,7 @@ namespace azure { namespace storage { namespace samples { azure::storage::table_batch_operation batch_operation; for (int i = 0; i < 10; ++i) { - utility::string_t row_key = _XPLATSTR("MyRowKey") + utility::conversions::print_string(i); + utility::string_t row_key = _XPLATSTR("MyRowKey") + utility::conversions::to_string_t(std::to_string(i)); azure::storage::table_entity entity(_XPLATSTR("MyPartitionKey"), row_key); azure::storage::table_entity::properties_type& properties = entity.properties(); properties.reserve(8); @@ -115,11 +116,4 @@ } } -}}} // namespace azure::storage::samples - -int main(int argc, const char *argv[]) -{ - azure::storage::samples::tables_getting_started_sample(); - return 0; -} - +}}} // namespace azure::storage::samples diff --git a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/CMakeLists.txt b/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/CMakeLists.txt deleted file mode 100644 index 610445eb..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/CMakeLists.txt +++ /dev/null @@ -1,11 +0,0 @@ -include_directories(.
${AZURESTORAGESAMPLES_INCLUDE_DIRS}) - -# THE ORDER OF FILES IS VERY /VERY/ IMPORTANT -if(UNIX) - set(SOURCES - Application.cpp - stdafx.cpp - ) -endif() - -buildsample(${AZURESTORAGESAMPLES_JSON} ${SOURCES}) \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v120.vcxproj b/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v120.vcxproj deleted file mode 100644 index fafe52e4..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v120.vcxproj +++ /dev/null @@ -1,122 +0,0 @@ - - - - - Debug - Win32 - - - Release - Win32 - - - - {94826A9A-3E5B-4798-8DC9-73CB41488064} - MicrosoftWindowsAzureStorageJsonPayloadFormat - - - - Application - true - v120 - Unicode - - - Application - false - v120 - true - Unicode - - - - - - - - - - - - f74538e0 - - - true - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - false - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - - - - - - Level3 - Use - MaxSpeed - true - true - WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - true - true - - - - - - - - - Create - Create - - - - - {dcff75b0-b142-4ec8-992f-3e48f2e3eece} - true - true - false - true - false - - - {6412bfc8-d0f2-4a87-8c36-4efd77157859} - - - - - - - - - - - - This project references NuGet package(s) that are missing on this computer. Enable NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. 
The missing file is {0}. - - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v120.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v120.vcxproj.filters deleted file mode 100644 index 50b73c7f..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v120.vcxproj.filters +++ /dev/null @@ -1,33 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - Header Files - - - - - Source Files - - - Source Files - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v140.vcxproj b/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v140.vcxproj deleted file mode 100644 index 0b736f75..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v140.vcxproj +++ /dev/null @@ -1,120 +0,0 @@ - - - - - Debug - Win32 - - - Release - Win32 - - - - {BF42EBC0-0CEC-4053-8145-11917E03F7BD} - MicrosoftWindowsAzureStorageJsonPayloadFormat - - - - Application - true - v140 - Unicode - - - Application - false - v140 - true - Unicode - - - - - - - - - - - - - true - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - false - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - - Use - Level3 - Disabled - 
WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - - - - - - Level3 - Use - MaxSpeed - true - true - WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - true - true - - - - - - - - - Create - Create - - - - - {25D342C3-6CDA-44DD-A16A-32A19B692785} - true - true - false - true - false - - - {2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C} - - - - - - - - - - - - This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. - - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v140.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v140.vcxproj.filters deleted file mode 100644 index 50b73c7f..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v140.vcxproj.filters +++ /dev/null @@ -1,33 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - Header Files - - - - - Source Files - - - Source Files - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/packages.config b/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/packages.config deleted file mode 100644 index ebd30628..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/packages.config +++ /dev/null @@ -1,5 +0,0 @@ - - - - - \ No newline at end of file 
diff --git a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/stdafx.cpp b/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/stdafx.cpp deleted file mode 100644 index 4c691102..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/stdafx.cpp +++ /dev/null @@ -1,23 +0,0 @@ -// ----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. -// -// ----------------------------------------------------------------------------------------- - -// stdafx.cpp : source file that includes just the standard includes -// ConsoleApplication1.pch will be the pre-compiled header -// stdafx.obj will contain the pre-compiled type information - -#include "stdafx.h" - diff --git a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/stdafx.h b/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/stdafx.h deleted file mode 100644 index e52a6520..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/JsonPayloadFormat/stdafx.h +++ /dev/null @@ -1,28 +0,0 @@ -// ----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. -// -// ----------------------------------------------------------------------------------------- - -// stdafx.h : include file for standard system include files, -// or project specific include files that are used frequently, but -// are changed infrequently -// - -#pragma once - -#include "targetver.h" - -#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers - diff --git a/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v120.sln b/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v120.sln deleted file mode 100644 index 1a85e730..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v120.sln +++ /dev/null @@ -1,75 +0,0 @@ -Microsoft Visual Studio Solution File, Format Version 12.00 -# Visual Studio 2013 -VisualStudioVersion = 12.0.30501.0 -MinimumVisualStudioVersion = 10.0.40219.1 -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.TablesGettingStarted.v120", "TablesGettingStarted\Microsoft.WindowsAzure.Storage.TablesGettingStarted.v120.vcxproj", "{D52C867C-D624-456F-ADAC-D92A0C975404}" -EndProject -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.SamplesCommon.v120", "SamplesCommon\Microsoft.WindowsAzure.Storage.SamplesCommon.v120.vcxproj", "{6412BFC8-D0F2-4A87-8C36-4EFD77157859}" -EndProject -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.v120", "..\Microsoft.WindowsAzure.Storage.v120.vcxproj", 
"{DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}" -EndProject -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v120", "BlobsGettingStarted\Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v120.vcxproj", "{234BE88B-26A3-46DB-A199-C9AD595E8076}" -EndProject -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v120", "QueuesGettingStarted\Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v120.vcxproj", "{EBA905B9-9EAB-4C8B-98D3-64A37AD3D8BC}" -EndProject -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v120", "JsonPayloadFormat\Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v120.vcxproj", "{94826A9A-3E5B-4798-8DC9-73CB41488064}" -EndProject -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.FilesGettingStarted.v120", "FilesGettingStarted\Microsoft.WindowsAzure.Storage.FilesGettingStarted.v120.vcxproj", "{CF7A5DAB-D260-4E82-971D-C69208F7C394}" -EndProject -Global - GlobalSection(SolutionConfigurationPlatforms) = preSolution - Debug|Win32 = Debug|Win32 - Debug|x64 = Debug|x64 - Release|Win32 = Release|Win32 - Release|x64 = Release|x64 - EndGlobalSection - GlobalSection(ProjectConfigurationPlatforms) = postSolution - {D52C867C-D624-456F-ADAC-D92A0C975404}.Debug|Win32.ActiveCfg = Debug|Win32 - {D52C867C-D624-456F-ADAC-D92A0C975404}.Debug|Win32.Build.0 = Debug|Win32 - {D52C867C-D624-456F-ADAC-D92A0C975404}.Debug|x64.ActiveCfg = Debug|Win32 - {D52C867C-D624-456F-ADAC-D92A0C975404}.Release|Win32.ActiveCfg = Release|Win32 - {D52C867C-D624-456F-ADAC-D92A0C975404}.Release|Win32.Build.0 = Release|Win32 - {D52C867C-D624-456F-ADAC-D92A0C975404}.Release|x64.ActiveCfg = Release|Win32 - {6412BFC8-D0F2-4A87-8C36-4EFD77157859}.Debug|Win32.ActiveCfg = Debug|Win32 - {6412BFC8-D0F2-4A87-8C36-4EFD77157859}.Debug|Win32.Build.0 = Debug|Win32 - {6412BFC8-D0F2-4A87-8C36-4EFD77157859}.Debug|x64.ActiveCfg 
= Debug|Win32 - {6412BFC8-D0F2-4A87-8C36-4EFD77157859}.Release|Win32.ActiveCfg = Release|Win32 - {6412BFC8-D0F2-4A87-8C36-4EFD77157859}.Release|Win32.Build.0 = Release|Win32 - {6412BFC8-D0F2-4A87-8C36-4EFD77157859}.Release|x64.ActiveCfg = Release|Win32 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Debug|Win32.ActiveCfg = Debug|Win32 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Debug|Win32.Build.0 = Debug|Win32 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Debug|x64.ActiveCfg = Debug|x64 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Debug|x64.Build.0 = Debug|x64 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Release|Win32.ActiveCfg = Release|Win32 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Release|Win32.Build.0 = Release|Win32 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Release|x64.ActiveCfg = Release|x64 - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE}.Release|x64.Build.0 = Release|x64 - {234BE88B-26A3-46DB-A199-C9AD595E8076}.Debug|Win32.ActiveCfg = Debug|Win32 - {234BE88B-26A3-46DB-A199-C9AD595E8076}.Debug|Win32.Build.0 = Debug|Win32 - {234BE88B-26A3-46DB-A199-C9AD595E8076}.Debug|x64.ActiveCfg = Debug|Win32 - {234BE88B-26A3-46DB-A199-C9AD595E8076}.Release|Win32.ActiveCfg = Release|Win32 - {234BE88B-26A3-46DB-A199-C9AD595E8076}.Release|Win32.Build.0 = Release|Win32 - {234BE88B-26A3-46DB-A199-C9AD595E8076}.Release|x64.ActiveCfg = Release|Win32 - {EBA905B9-9EAB-4C8B-98D3-64A37AD3D8BC}.Debug|Win32.ActiveCfg = Debug|Win32 - {EBA905B9-9EAB-4C8B-98D3-64A37AD3D8BC}.Debug|Win32.Build.0 = Debug|Win32 - {EBA905B9-9EAB-4C8B-98D3-64A37AD3D8BC}.Debug|x64.ActiveCfg = Debug|Win32 - {EBA905B9-9EAB-4C8B-98D3-64A37AD3D8BC}.Release|Win32.ActiveCfg = Release|Win32 - {EBA905B9-9EAB-4C8B-98D3-64A37AD3D8BC}.Release|Win32.Build.0 = Release|Win32 - {EBA905B9-9EAB-4C8B-98D3-64A37AD3D8BC}.Release|x64.ActiveCfg = Release|Win32 - {94826A9A-3E5B-4798-8DC9-73CB41488064}.Debug|Win32.ActiveCfg = Debug|Win32 - {94826A9A-3E5B-4798-8DC9-73CB41488064}.Debug|Win32.Build.0 = Debug|Win32 - 
{94826A9A-3E5B-4798-8DC9-73CB41488064}.Debug|x64.ActiveCfg = Debug|Win32 - {94826A9A-3E5B-4798-8DC9-73CB41488064}.Release|Win32.ActiveCfg = Release|Win32 - {94826A9A-3E5B-4798-8DC9-73CB41488064}.Release|Win32.Build.0 = Release|Win32 - {94826A9A-3E5B-4798-8DC9-73CB41488064}.Release|x64.ActiveCfg = Release|Win32 - {CF7A5DAB-D260-4E82-971D-C69208F7C394}.Debug|Win32.ActiveCfg = Debug|Win32 - {CF7A5DAB-D260-4E82-971D-C69208F7C394}.Debug|Win32.Build.0 = Debug|Win32 - {CF7A5DAB-D260-4E82-971D-C69208F7C394}.Debug|x64.ActiveCfg = Debug|Win32 - {CF7A5DAB-D260-4E82-971D-C69208F7C394}.Release|Win32.ActiveCfg = Release|Win32 - {CF7A5DAB-D260-4E82-971D-C69208F7C394}.Release|Win32.Build.0 = Release|Win32 - {CF7A5DAB-D260-4E82-971D-C69208F7C394}.Release|x64.ActiveCfg = Release|Win32 - EndGlobalSection - GlobalSection(SolutionProperties) = preSolution - HideSolutionNode = FALSE - EndGlobalSection -EndGlobal diff --git a/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v140.sln b/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v140.sln deleted file mode 100644 index 33bdfcc3..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v140.sln +++ /dev/null @@ -1,75 +0,0 @@ -Microsoft Visual Studio Solution File, Format Version 12.00 -# Visual Studio 14 -VisualStudioVersion = 14.0.23107.0 -MinimumVisualStudioVersion = 10.0.40219.1 -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.TablesGettingStarted.v140", "TablesGettingStarted\Microsoft.WindowsAzure.Storage.TablesGettingStarted.v140.vcxproj", "{A99AA81C-3952-465C-AD9B-3F0A940DB30D}" -EndProject -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.SamplesCommon.v140", "SamplesCommon\Microsoft.WindowsAzure.Storage.SamplesCommon.v140.vcxproj", "{2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C}" -EndProject -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = 
"Microsoft.WindowsAzure.Storage.v140", "..\Microsoft.WindowsAzure.Storage.v140.vcxproj", "{25D342C3-6CDA-44DD-A16A-32A19B692785}" -EndProject -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v140", "BlobsGettingStarted\Microsoft.WindowsAzure.Storage.BlobsGettingStarted.v140.vcxproj", "{F2E0AF2E-8517-4E2B-85AA-701F7EBA6130}" -EndProject -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v140", "QueuesGettingStarted\Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v140.vcxproj", "{DD2A4B3A-5D82-40A6-9F8F-17503EC6E326}" -EndProject -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v140", "JsonPayloadFormat\Microsoft.WindowsAzure.Storage.JsonPayloadFormat.v140.vcxproj", "{BF42EBC0-0CEC-4053-8145-11917E03F7BD}" -EndProject -Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "Microsoft.WindowsAzure.Storage.FilesGettingStarted.v140", "FilesGettingStarted\Microsoft.WindowsAzure.Storage.FilesGettingStarted.v140.vcxproj", "{69B7A340-02D1-4685-9319-4804558321A1}" -EndProject -Global - GlobalSection(SolutionConfigurationPlatforms) = preSolution - Debug|Win32 = Debug|Win32 - Debug|x64 = Debug|x64 - Release|Win32 = Release|Win32 - Release|x64 = Release|x64 - EndGlobalSection - GlobalSection(ProjectConfigurationPlatforms) = postSolution - {A99AA81C-3952-465C-AD9B-3F0A940DB30D}.Debug|Win32.ActiveCfg = Debug|Win32 - {A99AA81C-3952-465C-AD9B-3F0A940DB30D}.Debug|Win32.Build.0 = Debug|Win32 - {A99AA81C-3952-465C-AD9B-3F0A940DB30D}.Debug|x64.ActiveCfg = Debug|Win32 - {A99AA81C-3952-465C-AD9B-3F0A940DB30D}.Release|Win32.ActiveCfg = Release|Win32 - {A99AA81C-3952-465C-AD9B-3F0A940DB30D}.Release|Win32.Build.0 = Release|Win32 - {A99AA81C-3952-465C-AD9B-3F0A940DB30D}.Release|x64.ActiveCfg = Release|Win32 - {2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C}.Debug|Win32.ActiveCfg = Debug|Win32 - 
{2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C}.Debug|Win32.Build.0 = Debug|Win32 - {2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C}.Debug|x64.ActiveCfg = Debug|Win32 - {2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C}.Release|Win32.ActiveCfg = Release|Win32 - {2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C}.Release|Win32.Build.0 = Release|Win32 - {2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C}.Release|x64.ActiveCfg = Release|Win32 - {25D342C3-6CDA-44DD-A16A-32A19B692785}.Debug|Win32.ActiveCfg = Debug|Win32 - {25D342C3-6CDA-44DD-A16A-32A19B692785}.Debug|Win32.Build.0 = Debug|Win32 - {25D342C3-6CDA-44DD-A16A-32A19B692785}.Debug|x64.ActiveCfg = Debug|x64 - {25D342C3-6CDA-44DD-A16A-32A19B692785}.Debug|x64.Build.0 = Debug|x64 - {25D342C3-6CDA-44DD-A16A-32A19B692785}.Release|Win32.ActiveCfg = Release|Win32 - {25D342C3-6CDA-44DD-A16A-32A19B692785}.Release|Win32.Build.0 = Release|Win32 - {25D342C3-6CDA-44DD-A16A-32A19B692785}.Release|x64.ActiveCfg = Release|x64 - {25D342C3-6CDA-44DD-A16A-32A19B692785}.Release|x64.Build.0 = Release|x64 - {F2E0AF2E-8517-4E2B-85AA-701F7EBA6130}.Debug|Win32.ActiveCfg = Debug|Win32 - {F2E0AF2E-8517-4E2B-85AA-701F7EBA6130}.Debug|Win32.Build.0 = Debug|Win32 - {F2E0AF2E-8517-4E2B-85AA-701F7EBA6130}.Debug|x64.ActiveCfg = Debug|Win32 - {F2E0AF2E-8517-4E2B-85AA-701F7EBA6130}.Release|Win32.ActiveCfg = Release|Win32 - {F2E0AF2E-8517-4E2B-85AA-701F7EBA6130}.Release|Win32.Build.0 = Release|Win32 - {F2E0AF2E-8517-4E2B-85AA-701F7EBA6130}.Release|x64.ActiveCfg = Release|Win32 - {DD2A4B3A-5D82-40A6-9F8F-17503EC6E326}.Debug|Win32.ActiveCfg = Debug|Win32 - {DD2A4B3A-5D82-40A6-9F8F-17503EC6E326}.Debug|Win32.Build.0 = Debug|Win32 - {DD2A4B3A-5D82-40A6-9F8F-17503EC6E326}.Debug|x64.ActiveCfg = Debug|Win32 - {DD2A4B3A-5D82-40A6-9F8F-17503EC6E326}.Release|Win32.ActiveCfg = Release|Win32 - {DD2A4B3A-5D82-40A6-9F8F-17503EC6E326}.Release|Win32.Build.0 = Release|Win32 - {DD2A4B3A-5D82-40A6-9F8F-17503EC6E326}.Release|x64.ActiveCfg = Release|Win32 - {BF42EBC0-0CEC-4053-8145-11917E03F7BD}.Debug|Win32.ActiveCfg = 
Debug|Win32 - {BF42EBC0-0CEC-4053-8145-11917E03F7BD}.Debug|Win32.Build.0 = Debug|Win32 - {BF42EBC0-0CEC-4053-8145-11917E03F7BD}.Debug|x64.ActiveCfg = Debug|Win32 - {BF42EBC0-0CEC-4053-8145-11917E03F7BD}.Release|Win32.ActiveCfg = Release|Win32 - {BF42EBC0-0CEC-4053-8145-11917E03F7BD}.Release|Win32.Build.0 = Release|Win32 - {BF42EBC0-0CEC-4053-8145-11917E03F7BD}.Release|x64.ActiveCfg = Release|Win32 - {69B7A340-02D1-4685-9319-4804558321A1}.Debug|Win32.ActiveCfg = Debug|Win32 - {69B7A340-02D1-4685-9319-4804558321A1}.Debug|Win32.Build.0 = Debug|Win32 - {69B7A340-02D1-4685-9319-4804558321A1}.Debug|x64.ActiveCfg = Debug|Win32 - {69B7A340-02D1-4685-9319-4804558321A1}.Release|Win32.ActiveCfg = Release|Win32 - {69B7A340-02D1-4685-9319-4804558321A1}.Release|Win32.Build.0 = Release|Win32 - {69B7A340-02D1-4685-9319-4804558321A1}.Release|x64.ActiveCfg = Release|Win32 - EndGlobalSection - GlobalSection(SolutionProperties) = preSolution - HideSolutionNode = FALSE - EndGlobalSection -EndGlobal diff --git a/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v140.vcxproj b/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v140.vcxproj new file mode 100644 index 00000000..5d2b7989 --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v140.vcxproj @@ -0,0 +1,161 @@ + + + + + Debug + Win32 + + + Release + Win32 + + + Debug + x64 + + + Release + x64 + + + + {B6FBFB60-112B-4A38-B39A-12DB18876B1C} + MicrosoftWindowsAzureStorageSamples + 8.1 + + + + Application + true + v140 + Unicode + + + Application + false + v140 + true + Unicode + + + Application + true + v140 + Unicode + + + Application + false + v140 + true + Unicode + + + + + + + + + + + + + + + + + + + + + $(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ + wastoresample + + + $(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ + wastoresample + + + 
$(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ + wastoresample + + + $(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ + wastoresample + + + + Level3 + Disabled + true + true + ..\includes;%(AdditionalIncludeDirectories) + true + + + + + Level3 + Disabled + true + true + ..\includes;%(AdditionalIncludeDirectories) + true + + + + + Level3 + MaxSpeed + true + true + true + true + ..\includes;%(AdditionalIncludeDirectories) + true + + + true + true + + + + + Level3 + MaxSpeed + true + true + true + true + ..\includes;%(AdditionalIncludeDirectories) + true + + + true + true + + + + + + + + + + + + + + + + + + + {a8e200a6-910e-44f4-9e8e-c37e45b7ad42} + + + + + + \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v140.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v140.vcxproj.filters new file mode 100644 index 00000000..5aef418c --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v140.vcxproj.filters @@ -0,0 +1,17 @@ + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v141.vcxproj b/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v141.vcxproj new file mode 100644 index 00000000..47cd0966 --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v141.vcxproj @@ -0,0 +1,163 @@ + + + + + Debug + Win32 + + + Release + Win32 + + + Debug + x64 + + + Release + x64 + + + + 15.0 + {B6FBFB60-112B-4A38-B39A-12DB18876B1C} + MicrosoftWindowsAzureStorageSamples + 8.1 + + + + Application + true + v141 + Unicode + + + Application + false + v141 + true + Unicode + + + Application + true + v141 + Unicode + + + Application + false + v141 + true + Unicode + + + + + + + + + + + + + + + + + + + + + 
$(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ + wastoresample + + + $(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ + wastoresample + + + $(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ + wastoresample + + + $(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ + wastoresample + + + + Level3 + Disabled + true + true + ..\includes;%(AdditionalIncludeDirectories) + true + + + + + Level3 + Disabled + true + true + ..\includes;%(AdditionalIncludeDirectories) + true + + + + + Level3 + MaxSpeed + true + true + true + true + ..\includes;%(AdditionalIncludeDirectories) + true + + + true + true + + + + + Level3 + MaxSpeed + true + true + true + true + ..\includes;%(AdditionalIncludeDirectories) + true + + + true + true + + + + + + + + + + + + + + + + + + + + {a8e200a6-910e-44f4-9e8e-c37e45b7ad42} + + + + + + \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v141.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v141.vcxproj.filters new file mode 100644 index 00000000..35909892 --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/samples/Microsoft.WindowsAzure.Storage.Samples.v141.vcxproj.filters @@ -0,0 +1,18 @@ + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/NativeClientLibraryDemo1.cpp b/Microsoft.WindowsAzure.Storage/samples/NativeClientLibraryDemo1.cpp new file mode 100644 index 00000000..b63efde4 --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/samples/NativeClientLibraryDemo1.cpp @@ -0,0 +1,90 @@ +// ----------------------------------------------------------------------------------------- +// +// Copyright 2019 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. +// +// ----------------------------------------------------------------------------------------- + +#include "samples_common.h" + +#include <was/storage_account.h> +#include <was/table.h> +#include <was/queue.h> + + +namespace azure { namespace storage { namespace samples { + + SAMPLE(Channel9GoingNativeDemo1, channel9_going_native_demo1) + void channel9_going_native_demo1() { + try + { + azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string); + + azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client(); + azure::storage::cloud_table table = table_client.get_table_reference(_XPLATSTR("blogposts")); + table.create_if_not_exists(); + + azure::storage::cloud_queue_client queue_client = storage_account.create_cloud_queue_client(); + azure::storage::cloud_queue queue = queue_client.get_queue_reference(_XPLATSTR("blog-processing")); + queue.create_if_not_exists(); + + while (true) + { + utility::string_t name; + ucin >> name; + + + // Table + + azure::storage::table_batch_operation batch_operation; + for (int i = 0; i < 3; ++i) + { + utility::string_t partition_key = _XPLATSTR("partition"); + utility::string_t row_key = name + utility::conversions::to_string_t(std::to_string(i)); + + azure::storage::table_entity entity(partition_key, row_key); + entity.properties()[_XPLATSTR("PostId")] = azure::storage::entity_property(rand()); + entity.properties()[_XPLATSTR("Content")] = azure::storage::entity_property(utility::string_t(_XPLATSTR("some text"))); + entity.properties()[_XPLATSTR("Date")] =
azure::storage::entity_property(utility::datetime::utc_now()); + batch_operation.insert_entity(entity); + } + + pplx::task<void> table_task = table.execute_batch_async(batch_operation).then([](std::vector<azure::storage::table_result> results) + { + for (auto it = results.cbegin(); it != results.cend(); ++it) + { + ucout << _XPLATSTR("Status: ") << it->http_status_code() << std::endl; + } + }); + + + // Queue + + azure::storage::cloud_queue_message queue_message(name); + std::chrono::seconds time_to_live(100000); + std::chrono::seconds initial_visibility_timeout(rand() % 30); + azure::storage::queue_request_options options; + + pplx::task<void> queue_task = queue.add_message_async(queue_message, time_to_live, initial_visibility_timeout, options, azure::storage::operation_context()); + + + queue_task.wait(); + table_task.wait(); + } + } + catch (const azure::storage::storage_exception& e) + { + ucout << e.what() << std::endl; + } + } +}}} // namespace azure::storage::samples diff --git a/Microsoft.WindowsAzure.Storage/samples/NativeClientLibraryDemo2.cpp b/Microsoft.WindowsAzure.Storage/samples/NativeClientLibraryDemo2.cpp new file mode 100644 index 00000000..2eb64dfe --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/samples/NativeClientLibraryDemo2.cpp @@ -0,0 +1,86 @@ +// ----------------------------------------------------------------------------------------- +// +// Copyright 2019 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License.
+// +// ----------------------------------------------------------------------------------------- + +#include "samples_common.h" + +#include <chrono> +#include <thread> + +#include <was/storage_account.h> +#include <was/table.h> +#include <was/queue.h> + + +namespace azure { namespace storage { namespace samples { + + SAMPLE(Channel9GoingNativeDemo2, channel9_going_native_demo2) + void channel9_going_native_demo2() { + try + { + azure::storage::cloud_storage_account storage_account = azure::storage::cloud_storage_account::parse(storage_connection_string); + + azure::storage::cloud_table_client table_client = storage_account.create_cloud_table_client(); + azure::storage::cloud_table table = table_client.get_table_reference(_XPLATSTR("blogposts")); + table.create_if_not_exists(); + + azure::storage::cloud_queue_client queue_client = storage_account.create_cloud_queue_client(); + azure::storage::cloud_queue queue = queue_client.get_queue_reference(_XPLATSTR("blog-processing")); + queue.create_if_not_exists(); + + while (true) + { + azure::storage::cloud_queue_message message = queue.get_message(); + if (!message.id().empty()) + { + utility::string_t partition_key(_XPLATSTR("partition")); + utility::string_t start_row_key = message.content_as_string(); + utility::string_t end_row_key = message.content_as_string() + _XPLATSTR(":"); + + azure::storage::table_query query; + query.set_filter_string(azure::storage::table_query::combine_filter_conditions( + azure::storage::table_query::combine_filter_conditions( + azure::storage::table_query::generate_filter_condition(_XPLATSTR("PartitionKey"), azure::storage::query_comparison_operator::equal, partition_key), + azure::storage::query_logical_operator::op_and, + azure::storage::table_query::generate_filter_condition(_XPLATSTR("RowKey"), azure::storage::query_comparison_operator::greater_than, start_row_key)), + azure::storage::query_logical_operator::op_and, + azure::storage::table_query::generate_filter_condition(_XPLATSTR("RowKey"), azure::storage::query_comparison_operator::less_than,
end_row_key)) + ); + + azure::storage::table_query_iterator it = table.execute_query(query); + azure::storage::table_query_iterator end_of_results; + for (; it != end_of_results; ++it) + { + ucout << _XPLATSTR("Entity: ") << it->row_key() << _XPLATSTR(" "); + ucout << it->properties().at(_XPLATSTR("PostId")).int32_value() << _XPLATSTR(" "); + ucout << it->properties().at(_XPLATSTR("Content")).string_value() << std::endl; + } + + queue.delete_message(message); + } + + std::this_thread::sleep_for(std::chrono::milliseconds(1000)); + } + + table.delete_table_if_exists(); + queue.delete_queue_if_exists(); + } + catch (const azure::storage::storage_exception& e) + { + ucout << e.what() << std::endl; + } + } +}}} // namespace azure::storage::samples diff --git a/Microsoft.WindowsAzure.Storage/samples/OAuthGettingStarted.cpp b/Microsoft.WindowsAzure.Storage/samples/OAuthGettingStarted.cpp new file mode 100644 index 00000000..a3b46427 --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/samples/OAuthGettingStarted.cpp @@ -0,0 +1,75 @@ +// ----------------------------------------------------------------------------------------- +// +// Copyright 2019 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+// +// ----------------------------------------------------------------------------------------- + +#include "samples_common.h" + +#include <was/storage_account.h> +#include <was/blob.h> + + +namespace azure { namespace storage { namespace samples { + + SAMPLE(OAuthGettingStarted, oauth_getting_started_sample) + void oauth_getting_started_sample() + { + utility::string_t account_name = _XPLATSTR("YOUR ACCOUNT_NAME"); + utility::string_t oauth_access_token(_XPLATSTR("PUT_YOUR_OAUTH_2_0_ACCESS_TOKEN_HERE")); + + using OAuthAccessToken = azure::storage::storage_credentials::bearer_token_credential; + azure::storage::storage_credentials storage_cred(account_name, OAuthAccessToken{ oauth_access_token }); + + azure::storage::cloud_storage_account storage_account(storage_cred, /* use https */ true); + + auto blob_client = storage_account.create_cloud_blob_client(); + auto blob_container = blob_client.get_container_reference(_XPLATSTR("YOUR_CONTAINER")); + auto blob = blob_container.get_blob_reference(_XPLATSTR("FOO.BAR")); + + try + { + std::cout << blob.exists() << std::endl; + } + catch (const azure::storage::storage_exception& e) + { + std::cout << e.what() << std::endl; + } + + // Here we make some copies of the storage credential. + azure::storage::storage_credentials storage_cred2(storage_cred); + azure::storage::storage_credentials storage_cred3; + storage_cred3 = storage_cred2; + azure::storage::storage_credentials storage_cred4(OAuthAccessToken{ oauth_access_token }); + + // After a while, the access token may expire; refresh it. + storage_cred.set_bearer_token(_XPLATSTR("YOUR_NEW_OAUTH_2_0_ACCESS_TOKEN")); + // storage_cred2.set_bearer_token(_XPLATSTR("YOUR_NEW_OAUTH_2_0_ACCESS_TOKEN")); + // storage_cred3.set_bearer_token(_XPLATSTR("YOUR_NEW_OAUTH_2_0_ACCESS_TOKEN")); + // Note that every storage_credentials struct copied directly or indirectly shares the same underlying access token. + // So the three lines above have the same effect.
+ + // But if you create another storage_credentials with the same access token, it is not connected to the others, because it was not created by copying or assignment. + storage_cred4.set_bearer_token(_XPLATSTR("YOUR_NEW_OAUTH_2_0_ACCESS_TOKEN")); // This doesn't change the access token inside storage_cred{1,2,3} + + try + { + std::cout << blob.exists() << std::endl; + } + catch (const azure::storage::storage_exception& e) + { + std::cout << e.what() << std::endl; + } + } +}}} // namespace azure::storage::samples \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Application.cpp b/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted.cpp similarity index 94% rename from Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Application.cpp rename to Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted.cpp index c2f6d10a..a7d635b0 100644 --- a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Application.cpp +++ b/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted.cpp @@ -1,5 +1,5 @@ // ----------------------------------------------------------------------------------------- -// +// // Copyright 2013 Microsoft Corporation // // Licensed under the Apache License, Version 2.0 (the "License"); @@ -15,14 +15,15 @@ // // ----------------------------------------------------------------------------------------- -#include "stdafx.h" #include "samples_common.h" -#include "was/storage_account.h" -#include "was/queue.h" +#include <was/storage_account.h> +#include <was/queue.h> + namespace azure { namespace storage { namespace samples { + SAMPLE(QueuesGettingStarted, queues_getting_started_sample) void queues_getting_started_sample() { try @@ -101,11 +102,4 @@ namespace azure { namespace storage { namespace samples { } } -}}} // namespace azure::storage::samples - -int main(int argc, const char *argv[]) -{ - azure::storage::samples::queues_getting_started_sample(); - return 0; -} - +}}} // namespace azure::storage::samples \ No newline at
end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/CMakeLists.txt b/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/CMakeLists.txt deleted file mode 100644 index 258e1934..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/CMakeLists.txt +++ /dev/null @@ -1,11 +0,0 @@ -include_directories(. ${AZURESTORAGESAMPLES_INCLUDE_DIRS}) - -# THE ORDER OF FILES IS VERY /VERY/ IMPORTANT -if(UNIX) - set(SOURCES - Application.cpp - stdafx.cpp - ) -endif() - -buildsample(${AZURESTORAGESAMPLES_QUEUES} ${SOURCES}) \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v120.vcxproj b/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v120.vcxproj deleted file mode 100644 index 05a6c4f8..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v120.vcxproj +++ /dev/null @@ -1,122 +0,0 @@ - - - - - Debug - Win32 - - - Release - Win32 - - - - {EBA905B9-9EAB-4C8B-98D3-64A37AD3D8BC} - MicrosoftWindowsAzureStorageQueuesGettingStarted - - - - Application - true - v120 - Unicode - - - Application - false - v120 - true - Unicode - - - - - - - - - - - - 56d78138 - - - true - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - false - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - - - - - - Level3 - Use - MaxSpeed - true - true - WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - true - true - - - - - 
- - - - Create - Create - - - - - {dcff75b0-b142-4ec8-992f-3e48f2e3eece} - true - true - false - true - false - - - {6412bfc8-d0f2-4a87-8c36-4efd77157859} - - - - - - - - - - - - This project references NuGet package(s) that are missing on this computer. Enable NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. - - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v120.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v120.vcxproj.filters deleted file mode 100644 index 50b73c7f..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v120.vcxproj.filters +++ /dev/null @@ -1,33 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - Header Files - - - - - Source Files - - - Source Files - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v140.vcxproj b/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v140.vcxproj deleted file mode 100644 index 9dff21af..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v140.vcxproj +++ /dev/null @@ -1,120 +0,0 @@ - - - - - Debug - Win32 - - - Release - Win32 - - - - {DD2A4B3A-5D82-40A6-9F8F-17503EC6E326} - MicrosoftWindowsAzureStorageQueuesGettingStarted - - - - Application - true - v140 - Unicode - - - Application - 
false - v140 - true - Unicode - - - - - - - - - - - - - true - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - false - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - - - - - - Level3 - Use - MaxSpeed - true - true - WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - true - true - - - - - - - - - Create - Create - - - - - {25D342C3-6CDA-44DD-A16A-32A19B692785} - true - true - false - true - false - - - {2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C} - - - - - - - - - - - - This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. 
- - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v140.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v140.vcxproj.filters deleted file mode 100644 index 50b73c7f..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/Microsoft.WindowsAzure.Storage.QueuesGettingStarted.v140.vcxproj.filters +++ /dev/null @@ -1,33 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - Header Files - - - - - Source Files - - - Source Files - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/packages.config b/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/packages.config deleted file mode 100644 index ebd30628..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/packages.config +++ /dev/null @@ -1,5 +0,0 @@ - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/stdafx.cpp b/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/stdafx.cpp deleted file mode 100644 index 4c691102..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/stdafx.cpp +++ /dev/null @@ -1,23 +0,0 @@ -// ----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. -// -// ----------------------------------------------------------------------------------------- - -// stdafx.cpp : source file that includes just the standard includes -// ConsoleApplication1.pch will be the pre-compiled header -// stdafx.obj will contain the pre-compiled type information - -#include "stdafx.h" - diff --git a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/stdafx.h b/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/stdafx.h deleted file mode 100644 index e52a6520..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/QueuesGettingStarted/stdafx.h +++ /dev/null @@ -1,28 +0,0 @@ -// ----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
-// -// ----------------------------------------------------------------------------------------- - -// stdafx.h : include file for standard system include files, -// or project specific include files that are used frequently, but -// are changed infrequently -// - -#pragma once - -#include "targetver.h" - -#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers - diff --git a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/CMakeLists.txt b/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/CMakeLists.txt deleted file mode 100644 index 2e08e5f3..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/CMakeLists.txt +++ /dev/null @@ -1,10 +0,0 @@ -include_directories(${AZURESTORAGESAMPLES_INCLUDE_DIRS}) - -# THE ORDER OF FILES IS VERY /VERY/ IMPORTANT -if(UNIX) - set(SOURCES - stdafx.cpp - ) -endif() - -buildsample(${AZURESTORAGESAMPLES_COMMON} ${SOURCES}) \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/Microsoft.WindowsAzure.Storage.SamplesCommon.v120.vcxproj b/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/Microsoft.WindowsAzure.Storage.SamplesCommon.v120.vcxproj deleted file mode 100644 index 6919be41..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/Microsoft.WindowsAzure.Storage.SamplesCommon.v120.vcxproj +++ /dev/null @@ -1,103 +0,0 @@ - - - - - Debug - Win32 - - - Release - Win32 - - - - {6412BFC8-D0F2-4A87-8C36-4EFD77157859} - MicrosoftWindowsAzureStorageSamplesCommon - - - - StaticLibrary - true - v120 - Unicode - - - StaticLibrary - false - v120 - true - Unicode - - - - - - - - - - - - - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_LIB;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - 
..\..\includes;%(AdditionalIncludeDirectories) - - - Windows - true - - - - - Level3 - Use - MaxSpeed - true - true - WIN32;NDEBUG;_LIB;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\..\includes;%(AdditionalIncludeDirectories) - - - Windows - true - true - true - - - - - - - - - Create - Create - - - - - {dcff75b0-b142-4ec8-992f-3e48f2e3eece} - false - true - false - true - false - - - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/Microsoft.WindowsAzure.Storage.SamplesCommon.v120.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/Microsoft.WindowsAzure.Storage.SamplesCommon.v120.vcxproj.filters deleted file mode 100644 index a25a2005..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/Microsoft.WindowsAzure.Storage.SamplesCommon.v120.vcxproj.filters +++ /dev/null @@ -1,30 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - Header Files - - - Header Files - - - - - Source Files - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/Microsoft.WindowsAzure.Storage.SamplesCommon.v140.vcxproj b/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/Microsoft.WindowsAzure.Storage.SamplesCommon.v140.vcxproj deleted file mode 100644 index 6f375974..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/Microsoft.WindowsAzure.Storage.SamplesCommon.v140.vcxproj +++ /dev/null @@ -1,104 +0,0 @@ - - - - - Debug - Win32 - - - Release - Win32 - - - - {2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C} - MicrosoftWindowsAzureStorageSamplesCommon - - - - StaticLibrary - true - v140 - Unicode - - - StaticLibrary - false - v140 - true - Unicode - - - - - - - - - - - - - 
$(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_LIB;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\..\includes;%(AdditionalIncludeDirectories) - - - Windows - true - - - - - Level3 - Use - MaxSpeed - true - true - WIN32;NDEBUG;_LIB;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\..\includes;%(AdditionalIncludeDirectories) - - - Windows - true - true - true - - - - - - - - - Create - Create - - - - - {25D342C3-6CDA-44DD-A16A-32A19B692785} - false - true - false - true - false - - - - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/Microsoft.WindowsAzure.Storage.SamplesCommon.v140.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/Microsoft.WindowsAzure.Storage.SamplesCommon.v140.vcxproj.filters deleted file mode 100644 index a25a2005..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/Microsoft.WindowsAzure.Storage.SamplesCommon.v140.vcxproj.filters +++ /dev/null @@ -1,30 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - Header Files - - - Header Files - - - - - Source Files - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/samples_common.h b/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/samples_common.h deleted file mode 100644 index 478dc13a..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/samples_common.h +++ /dev/null @@ -1,27 +0,0 @@ -// 
----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. -// -// ----------------------------------------------------------------------------------------- - -#pragma once - -#include "was/common.h" - -namespace azure { namespace storage { namespace samples { - - // TODO: Put your account name and account key here - utility::string_t storage_connection_string(_XPLATSTR("DefaultEndpointsProtocol=https;AccountName=myaccountname;AccountKey=myaccountkey")); - -}}} // namespace azure::storage::samples diff --git a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/stdafx.cpp b/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/stdafx.cpp deleted file mode 100644 index 558f0da8..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/SamplesCommon/stdafx.cpp +++ /dev/null @@ -1,23 +0,0 @@ -// ----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. -// -// ----------------------------------------------------------------------------------------- - -// stdafx.cpp : source file that includes just the standard includes -// Microsoft.WindowsAzure.Storage.SamplesCommon.pch will be the pre-compiled header -// stdafx.obj will contain the pre-compiled type information - -#include "stdafx.h" - diff --git a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Application.cpp b/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted.cpp similarity index 95% rename from Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Application.cpp rename to Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted.cpp index c05bc41e..1f21186a 100644 --- a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Application.cpp +++ b/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted.cpp @@ -1,5 +1,5 @@ // ----------------------------------------------------------------------------------------- -// +// // Copyright 2013 Microsoft Corporation // // Licensed under the Apache License, Version 2.0 (the "License"); @@ -15,14 +15,15 @@ // // ----------------------------------------------------------------------------------------- -#include "stdafx.h" #include "samples_common.h" -#include "was/storage_account.h" -#include "was/table.h" +#include +#include + namespace azure { namespace storage { namespace samples { + SAMPLE(TablesGettingStarted, tables_getting_started_sample) void tables_getting_started_sample() { try @@ -62,7 +63,7 @@ namespace azure { namespace storage { namespace samples { 
azure::storage::table_batch_operation batch_operation; for (int i = 0; i < 10; ++i) { - utility::string_t row_key = _XPLATSTR("MyRowKey") + utility::conversions::print_string(i); + utility::string_t row_key = _XPLATSTR("MyRowKey") + utility::conversions::to_string_t(std::to_string(i)); azure::storage::table_entity entity2(_XPLATSTR("MyPartitionKey"), row_key); azure::storage::table_entity::properties_type& properties2 = entity2.properties(); properties2.reserve(3); @@ -117,11 +118,4 @@ namespace azure { namespace storage { namespace samples { } } -}}} // namespace azure::storage::samples - -int main(int argc, const char *argv[]) -{ - azure::storage::samples::tables_getting_started_sample(); - return 0; -} - +}}} // namespace azure::storage::samples \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/CMakeLists.txt b/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/CMakeLists.txt deleted file mode 100644 index 8d2ffc42..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/CMakeLists.txt +++ /dev/null @@ -1,11 +0,0 @@ -include_directories(. 
${AZURESTORAGESAMPLES_INCLUDE_DIRS}) - -# THE ORDER OF FILES IS VERY /VERY/ IMPORTANT -if(UNIX) - set(SOURCES - Application.cpp - stdafx.cpp - ) -endif() - -buildsample(${AZURESTORAGESAMPLES_TABLES} ${SOURCES}) \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Microsoft.WindowsAzure.Storage.TablesGettingStarted.v120.vcxproj b/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Microsoft.WindowsAzure.Storage.TablesGettingStarted.v120.vcxproj deleted file mode 100644 index c426567d..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Microsoft.WindowsAzure.Storage.TablesGettingStarted.v120.vcxproj +++ /dev/null @@ -1,122 +0,0 @@ - - - - - Debug - Win32 - - - Release - Win32 - - - - {D52C867C-D624-456F-ADAC-D92A0C975404} - MicrosoftWindowsAzureStorageTablesGettingStarted - - - - Application - true - v120 - Unicode - - - Application - false - v120 - true - Unicode - - - - - - - - - - - - 150363ff - - - true - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - false - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - - - - - - Level3 - Use - MaxSpeed - true - true - WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - true - true - - - - - - - - - Create - Create - - - - - {dcff75b0-b142-4ec8-992f-3e48f2e3eece} - true - true - false - true - false - - - {6412bfc8-d0f2-4a87-8c36-4efd77157859} - - - - - - - - - - - - This project references NuGet package(s) that are missing on this computer. Enable NuGet Package Restore to download them. 
For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. - - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Microsoft.WindowsAzure.Storage.TablesGettingStarted.v120.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Microsoft.WindowsAzure.Storage.TablesGettingStarted.v120.vcxproj.filters deleted file mode 100644 index 50b73c7f..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Microsoft.WindowsAzure.Storage.TablesGettingStarted.v120.vcxproj.filters +++ /dev/null @@ -1,33 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - Header Files - - - - - Source Files - - - Source Files - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Microsoft.WindowsAzure.Storage.TablesGettingStarted.v140.vcxproj b/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Microsoft.WindowsAzure.Storage.TablesGettingStarted.v140.vcxproj deleted file mode 100644 index 2d3d1cc8..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Microsoft.WindowsAzure.Storage.TablesGettingStarted.v140.vcxproj +++ /dev/null @@ -1,120 +0,0 @@ - - - - - Debug - Win32 - - - Release - Win32 - - - - {A99AA81C-3952-465C-AD9B-3F0A940DB30D} - MicrosoftWindowsAzureStorageTablesGettingStarted - - - - Application - true - v140 - Unicode - - - Application - false - v140 - true - Unicode - - - - - - - - - - - - - true - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - $(PlatformToolset)\$(Platform)\$(Configuration)\ - - - false - $(ProjectDir)..\..\$(PlatformToolset)\$(Platform)\$(Configuration)\ - 
$(PlatformToolset)\$(Platform)\$(Configuration)\ - - - - Use - Level3 - Disabled - WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - - - - - - Level3 - Use - MaxSpeed - true - true - WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) - true - ..\SamplesCommon;..\..\includes - - - Console - true - true - true - - - - - - - - - Create - Create - - - - - {25D342C3-6CDA-44DD-A16A-32A19B692785} - true - true - false - true - false - - - {2F5FA2E8-7F88-45EC-BDEF-8AF31B1F719C} - - - - - - - - - - - - This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. - - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Microsoft.WindowsAzure.Storage.TablesGettingStarted.v140.vcxproj.filters b/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Microsoft.WindowsAzure.Storage.TablesGettingStarted.v140.vcxproj.filters deleted file mode 100644 index 50b73c7f..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/Microsoft.WindowsAzure.Storage.TablesGettingStarted.v140.vcxproj.filters +++ /dev/null @@ -1,33 +0,0 @@ - - - - - {4FC737F1-C7A5-4376-A066-2A32D752A2FF} - cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx - - - {93995380-89BD-4b04-88EB-625FBE52EBFB} - h;hpp;hxx;hm;inl;inc;xsd - - - {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} - rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms - - - - - Header Files - - - - - Source Files - - - Source Files - - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/packages.config b/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/packages.config deleted file mode 100644 index ebd30628..00000000 --- 
a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/packages.config +++ /dev/null @@ -1,5 +0,0 @@ - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/stdafx.cpp b/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/stdafx.cpp deleted file mode 100644 index 4c691102..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/stdafx.cpp +++ /dev/null @@ -1,23 +0,0 @@ -// ----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
-// -// ----------------------------------------------------------------------------------------- - -// stdafx.cpp : source file that includes just the standard includes -// ConsoleApplication1.pch will be the pre-compiled header -// stdafx.obj will contain the pre-compiled type information - -#include "stdafx.h" - diff --git a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/stdafx.h b/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/stdafx.h deleted file mode 100644 index e52a6520..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/TablesGettingStarted/stdafx.h +++ /dev/null @@ -1,28 +0,0 @@ -// ----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
-// -// ----------------------------------------------------------------------------------------- - -// stdafx.h : include file for standard system include files, -// or project specific include files that are used frequently, but -// are changed infrequently -// - -#pragma once - -#include "targetver.h" - -#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers - diff --git a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/stdafx.h b/Microsoft.WindowsAzure.Storage/samples/main.cpp similarity index 52% rename from Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/stdafx.h rename to Microsoft.WindowsAzure.Storage/samples/main.cpp index e52a6520..404b9b0c 100644 --- a/Microsoft.WindowsAzure.Storage/samples/FilesGettingStarted/stdafx.h +++ b/Microsoft.WindowsAzure.Storage/samples/main.cpp @@ -1,6 +1,6 @@ // ----------------------------------------------------------------------------------------- -// -// Copyright 2013 Microsoft Corporation +// +// Copyright 2019 Microsoft Corporation // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
@@ -15,14 +15,34 @@ // // ----------------------------------------------------------------------------------------- -// stdafx.h : include file for standard system include files, -// or project specific include files that are used frequently, but -// are changed infrequently -// - -#pragma once +#include "samples_common.h" -#include "targetver.h" -#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers +int main(int argc, char** argv) +{ + if (argc != 2) + { + printf("Usage: %s \n", argv[0]); + } + else + { + auto ite = Sample::samples().find(argv[1]); + if (ite == Sample::samples().end()) + { + printf("Cannot find sample %s\n", argv[1]); + } + else + { + auto func = ite->second; + func(); + return 0; + } + } + printf("\nAvailable sample names:\n"); + for (const auto& i : Sample::samples()) + { + printf(" %s\n", i.first.data()); + } + return 1; +} \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/samples/nuget.config b/Microsoft.WindowsAzure.Storage/samples/nuget.config deleted file mode 100644 index fd9f0c8c..00000000 --- a/Microsoft.WindowsAzure.Storage/samples/nuget.config +++ /dev/null @@ -1,6 +0,0 @@ - - - - - - diff --git a/Microsoft.WindowsAzure.Storage/samples/samples_common.h b/Microsoft.WindowsAzure.Storage/samples/samples_common.h new file mode 100644 index 00000000..d1cffdea --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/samples/samples_common.h @@ -0,0 +1,66 @@ +// ----------------------------------------------------------------------------------------- +// +// Copyright 2013 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. +// +// ----------------------------------------------------------------------------------------- + +#pragma once + +#include "was/common.h" + + +namespace azure { namespace storage { namespace samples { + + // TODO: Put your account name and account key here + const utility::string_t storage_connection_string(_XPLATSTR("DefaultEndpointsProtocol=https;AccountName=myaccountname;AccountKey=myaccountkey")); + +}}} // namespace azure::storage::samples + + +class Sample +{ +public: + static const std::map<std::string, std::function<void()>>& samples() + { + return m_samples(); + } + +protected: + static void add_sample(std::string sample_name, std::function<void()> func) + { + m_samples().emplace(std::move(sample_name), std::move(func)); + } + +private: + static std::map<std::string, std::function<void()>>& m_samples() + { + static std::map<std::string, std::function<void()>> samples_instance; + return samples_instance; + } +}; + +#define SAMPLE(NAME, FUNCTION) \ +void FUNCTION(); \ + \ +class Sample ## NAME : public Sample \ +{ \ +public: \ + Sample ## NAME() \ + { \ + add_sample(#NAME, FUNCTION); \ + } \ +}; \ +namespace { \ + Sample ## NAME Sample ## NAME_; \ +} diff --git a/Microsoft.WindowsAzure.Storage/src/CMakeLists.txt b/Microsoft.WindowsAzure.Storage/src/CMakeLists.txt index b08111fe..d775d23c 100644 --- a/Microsoft.WindowsAzure.Storage/src/CMakeLists.txt +++ b/Microsoft.WindowsAzure.Storage/src/CMakeLists.txt @@ -2,8 +2,11 @@ include_directories(${Boost_INCLUDE_DIR} ${OPENSSL_INCLUDE_DIR}) include_directories(${AZURESTORAGE_INCLUDE_DIRS}) # THE ORDER OF FILES IS VERY /VERY/ IMPORTANT -if(UNIX) +if(UNIX OR WIN32) set(SOURCES + timer_handler.cpp + executor.cpp 
+ xml_wrapper.cpp xmlhelpers.cpp response_parsers.cpp request_result.cpp @@ -57,31 +60,53 @@ if(UNIX) basic_types.cpp authentication.cpp cloud_common.cpp + crc64.cpp ) endif() -if ("${CMAKE_BUILD_TYPE}" MATCHES "Debug") - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fprofile-arcs -ftest-coverage") -endif() if (APPLE) - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${WARNINGS}") + set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${WARNINGS}") else() - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}") + set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}") +endif() + +if(MSVC) + add_compile_options(/Yustdafx.h) + set_source_files_properties(stdafx.cpp PROPERTIES COMPILE_FLAGS "/Ycstdafx.h") + + if (NOT CMAKE_GENERATOR MATCHES "Visual Studio .*") + set_property(SOURCE stdafx.cpp APPEND PROPERTY OBJECT_OUTPUTS "${CMAKE_CURRENT_BINARY_DIR}/stdafx.pch") + set_property(SOURCE ${SOURCES} APPEND PROPERTY OBJECT_DEPENDS "${CMAKE_CURRENT_BINARY_DIR}/stdafx.pch") + endif() + + list(APPEND SOURCES stdafx.cpp) endif() add_library(${AZURESTORAGE_LIBRARY} ${SOURCES}) target_link_libraries(${AZURESTORAGE_LIBRARIES}) +if(WIN32) + target_link_libraries(${AZURESTORAGE_LIBRARY} Ws2_32.lib rpcrt4.lib xmllite.lib bcrypt.lib) +endif() + # Portions specific to azure storage binary versioning and installation. 
if(UNIX) set_target_properties(${AZURESTORAGE_LIBRARY} PROPERTIES SOVERSION ${AZURESTORAGE_VERSION_MAJOR} VERSION ${AZURESTORAGE_VERSION_MAJOR}.${AZURESTORAGE_VERSION_MINOR}) - install( - TARGETS ${AZURESTORAGE_LIBRARY} - LIBRARY DESTINATION lib - ARCHIVE DESTINATION lib - ) +elseif(WIN32) + set_target_properties(${AZURESTORAGE_LIBRARY} PROPERTIES OUTPUT_NAME "wastorage") endif() + +install(FILES ${WAS_HEADERS} DESTINATION include/was) +install(FILES ${WASCORE_HEADERS} DESTINATION include/wascore) +install(FILES ${WASCORE_DATA} DESTINATION include/wascore) + +install( + TARGETS ${AZURESTORAGE_LIBRARY} + RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} + LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR} + ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR} +) diff --git a/Microsoft.WindowsAzure.Storage/src/authentication.cpp b/Microsoft.WindowsAzure.Storage/src/authentication.cpp index 713f3edb..ae15d889 100644 --- a/Microsoft.WindowsAzure.Storage/src/authentication.cpp +++ b/Microsoft.WindowsAzure.Storage/src/authentication.cpp @@ -24,13 +24,13 @@ namespace azure { namespace storage { namespace protocol { - utility::string_t calculate_hmac_sha256_hash(const utility::string_t& string_to_hash, const storage_credentials& credentials) + utility::string_t calculate_hmac_sha256_hash(const utility::string_t& string_to_hash, const std::vector& key) { std::string utf8_string_to_hash = utility::conversions::to_utf8string(string_to_hash); - core::hash_provider provider = core::hash_provider::create_hmac_sha256_hash_provider(credentials.account_key()); + core::hash_provider provider = core::hash_provider::create_hmac_sha256_hash_provider(key); provider.write(reinterpret_cast(utf8_string_to_hash.data()), utf8_string_to_hash.size()); provider.close(); - return provider.hash(); + return provider.hash().hmac_sha256(); } void sas_authentication_handler::sign_request(web::http::http_request& request, operation_context context) const @@ -62,12 +62,23 @@ namespace azure { namespace storage { namespace 
protocol { header_value.append(_XPLATSTR(" ")); header_value.append(m_credentials.account_name()); header_value.append(_XPLATSTR(":")); - header_value.append(calculate_hmac_sha256_hash(string_to_sign, m_credentials)); + header_value.append(calculate_hmac_sha256_hash(string_to_sign, m_credentials.account_key())); headers.add(web::http::header_names::authorization, header_value); } } + void bearer_token_authentication_handler::sign_request(web::http::http_request& request, operation_context context) const + { + web::http::http_headers& headers = request.headers(); + headers.add(ms_header_date, utility::datetime::utc_now().to_string()); + + if (m_credentials.is_bearer_token()) + { + headers.add(web::http::header_names::authorization, _XPLATSTR("Bearer ") + m_credentials.bearer_token()); + } + } + void canonicalizer_helper::append_resource(bool query_only_comp) { m_result.append(_XPLATSTR("/")); @@ -156,14 +167,11 @@ namespace azure { namespace storage { namespace protocol { if ((key_size > ms_header_prefix_size) && std::equal(ms_header_prefix, ms_header_prefix + ms_header_prefix_size, key, [](const utility::char_t &c1, const utility::char_t &c2) {return c1 == c2;})) { - if (!it->second.empty()) - { - utility::string_t transformed_key(key); - std::transform(transformed_key.begin(), transformed_key.end(), transformed_key.begin(), core::utility_char_tolower); - m_result.append(transformed_key); - m_result.append(_XPLATSTR(":")); - append(it->second); - } + utility::string_t transformed_key(key); + std::transform(transformed_key.begin(), transformed_key.end(), transformed_key.begin(), core::utility_char_tolower); + m_result.append(transformed_key); + m_result.append(_XPLATSTR(":")); + append(it->second); } } } diff --git a/Microsoft.WindowsAzure.Storage/src/blob_request_factory.cpp b/Microsoft.WindowsAzure.Storage/src/blob_request_factory.cpp index c31827ea..16340feb 100644 --- a/Microsoft.WindowsAzure.Storage/src/blob_request_factory.cpp +++ 
b/Microsoft.WindowsAzure.Storage/src/blob_request_factory.cpp @@ -80,7 +80,7 @@ namespace azure { namespace storage { namespace protocol { web::http::http_request set_blob_container_metadata(const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_resource_type, resource_container, /* do_encoding */ false)); - return set_blob_metadata(metadata, condition, uri_builder, timeout, context); + return set_blob_metadata(metadata, condition, blob_request_options(), uri_builder, timeout, context); } web::http::http_request get_blob_container_acl(const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) @@ -180,6 +180,12 @@ namespace azure { namespace storage { namespace protocol { include.append(_XPLATSTR(",")); } + if ((includes & blob_listing_details::versions) != 0) + { + include.append(component_versions); + include.append(_XPLATSTR(",")); + } + if (!include.empty()) { include.pop_back(); @@ -260,24 +266,54 @@ namespace azure { namespace storage { namespace protocol { } } - web::http::http_request put_block(const utility::string_t& block_id, const utility::string_t& content_md5, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + void add_encryption_key(web::http::http_request& request, const std::vector& key) + { + if (key.empty()) + { + return; + } + request.headers().add(ms_header_encryption_key, utility::conversions::to_base64(key)); + auto sha256_hash_provider = core::hash_provider::create_sha256_hash_provider(); + sha256_hash_provider.write(key.data(), key.size()); + sha256_hash_provider.close(); + request.headers().add(ms_header_encryption_key_sha256, sha256_hash_provider.hash().sha256()); + 
request.headers().add(ms_header_encryption_algorithm, header_value_encryption_algorithm_aes256); + } + + web::http::http_request put_block(const utility::string_t& block_id, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_component, component_block, /* do_encoding */ false)); uri_builder.append_query(core::make_query_parameter(uri_query_block_id, block_id)); web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); - request.headers().add(web::http::header_names::content_md5, content_md5); + if (content_checksum.is_md5()) + { + request.headers().add(web::http::header_names::content_md5, content_checksum.md5()); + } + else if (content_checksum.is_crc64()) + { + request.headers().add(ms_header_content_crc64, content_checksum.crc64()); + } add_lease_id(request, condition); + add_encryption_key(request, options.encryption_key()); return request; } - web::http::http_request put_block_list(const cloud_blob_properties& properties, const cloud_metadata& metadata, const utility::string_t& content_md5, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request put_block_list(const cloud_blob_properties& properties, const cloud_metadata& metadata, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_component, component_block_list, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); - 
request.headers().add(web::http::header_names::content_md5, content_md5); + if (content_checksum.is_md5()) + { + request.headers().add(web::http::header_names::content_md5, content_checksum.md5()); + } + else if (content_checksum.is_crc64()) + { + request.headers().add(ms_header_content_crc64, content_checksum.crc64()); + } add_properties(request, properties); add_metadata(request, metadata); add_access_condition(request, condition); + add_encryption_key(request, options.encryption_key()); return request; } @@ -316,18 +352,25 @@ namespace azure { namespace storage { namespace protocol { return request; } - web::http::http_request get_page_ranges_diff(utility::string_t previous_snapshot_time, utility::size64_t offset, utility::size64_t length, const utility::string_t& snapshot_time, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request get_page_ranges_diff(const utility::string_t& previous_snapshot_time, const utility::string_t& previous_snapshot_url, utility::size64_t offset, utility::size64_t length, const utility::string_t& snapshot_time, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { add_snapshot_time(uri_builder, snapshot_time); - add_previous_snapshot_time(uri_builder, previous_snapshot_time); + if (!previous_snapshot_time.empty()) + { + add_previous_snapshot_time(uri_builder, previous_snapshot_time); + } uri_builder.append_query(core::make_query_parameter(uri_query_component, component_page_list, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::GET, uri_builder, timeout, context)); + if (!previous_snapshot_url.empty()) + { + request.headers().add(ms_header_previous_snapshot_url, previous_snapshot_url); + } add_range(request, offset, length); add_access_condition(request, condition); return request; } - 
web::http::http_request put_page(page_range range, page_write write, const utility::string_t& content_md5, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request put_page(page_range range, page_write write, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_component, component_page, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); @@ -339,7 +382,14 @@ namespace azure { namespace storage { namespace protocol { { case page_write::update: headers.add(ms_header_page_write, header_value_page_write_update); - add_optional_header(headers, web::http::header_names::content_md5, content_md5); + if (content_checksum.is_md5()) + { + add_optional_header(headers, web::http::header_names::content_md5, content_checksum.md5()); + } + else if (content_checksum.is_crc64()) + { + add_optional_header(headers, ms_header_content_crc64, content_checksum.crc64()); + } break; case page_write::clear: @@ -349,72 +399,98 @@ namespace azure { namespace storage { namespace protocol { add_sequence_number_condition(request, condition); add_access_condition(request, condition); + add_encryption_key(request, options.encryption_key()); return request; } - web::http::http_request append_block(const utility::string_t& content_md5, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request append_block(const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context 
context) { uri_builder.append_query(core::make_query_parameter(uri_query_component, component_append_block, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); - request.headers().add(web::http::header_names::content_md5, content_md5); + if (content_checksum.is_md5()) + { + request.headers().add(web::http::header_names::content_md5, content_checksum.md5()); + } + else if (content_checksum.is_crc64()) + { + request.headers().add(ms_header_content_crc64, content_checksum.crc64()); + } add_append_condition(request, condition); add_access_condition(request, condition); + add_encryption_key(request, options.encryption_key()); return request; } - web::http::http_request put_block_blob(const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request put_block_blob(const checksum& content_checksum, const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); request.headers().add(ms_header_blob_type, header_value_blob_type_block); add_properties(request, properties); add_metadata(request, metadata); add_access_condition(request, condition); + if (content_checksum.is_crc64()) + { + request.headers().add(ms_header_content_crc64, content_checksum.crc64()); + } + add_encryption_key(request, options.encryption_key()); return request; } - web::http::http_request put_page_blob(utility::size64_t size, int64_t sequence_number, const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, 
web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request put_page_blob(utility::size64_t size, const utility::string_t& tier, int64_t sequence_number, const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); web::http::http_headers& headers = request.headers(); headers.add(ms_header_blob_type, header_value_blob_type_page); headers.add(ms_header_blob_content_length, size); headers.add(ms_header_blob_sequence_number, sequence_number); + if (tier != header_value_access_tier_unknown) + { + headers.add(ms_header_access_tier, tier); + } add_properties(request, properties); add_metadata(request, metadata); add_access_condition(request, condition); + add_encryption_key(request, options.encryption_key()); return request; } - web::http::http_request put_append_blob(const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request put_append_blob(const cloud_blob_properties& properties, const cloud_metadata& metadata, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); request.headers().add(ms_header_blob_type, header_value_blob_type_append); add_properties(request, properties); add_metadata(request, metadata); add_access_condition(request, condition); + add_encryption_key(request, options.encryption_key()); return request; } - 
web::http::http_request get_blob(utility::size64_t offset, utility::size64_t length, bool get_range_content_md5, const utility::string_t& snapshot_time, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request get_blob(utility::size64_t offset, utility::size64_t length, checksum_type needs_checksum, const utility::string_t& snapshot_time, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { add_snapshot_time(uri_builder, snapshot_time); web::http::http_request request(base_request(web::http::methods::GET, uri_builder, timeout, context)); add_range(request, offset, length); - if ((offset < std::numeric_limits::max()) && get_range_content_md5) + if ((offset < std::numeric_limits::max()) && needs_checksum == checksum_type::md5) { request.headers().add(ms_header_range_get_content_md5, header_value_true); } + else if ((offset < std::numeric_limits::max()) && needs_checksum == checksum_type::crc64) + { + request.headers().add(ms_header_range_get_content_crc64, header_value_true); + } add_access_condition(request, condition); + add_encryption_key(request, options.encryption_key()); return request; } - web::http::http_request get_blob_properties(const utility::string_t& snapshot_time, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request get_blob_properties(const utility::string_t& snapshot_time, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { add_snapshot_time(uri_builder, snapshot_time); web::http::http_request request(base_request(web::http::methods::HEAD, uri_builder, timeout, context)); 
add_access_condition(request, condition); + add_encryption_key(request, options.encryption_key()); return request; } @@ -464,21 +540,23 @@ namespace azure { namespace storage { namespace protocol { return request; } - web::http::http_request snapshot_blob(const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request snapshot_blob(const cloud_metadata& metadata, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_component, component_snapshot, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); add_metadata(request, metadata); add_access_condition(request, condition); + add_encryption_key(request, options.encryption_key()); return request; } - web::http::http_request set_blob_metadata(const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request set_blob_metadata(const cloud_metadata& metadata, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_component, component_metadata, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); add_metadata(request, metadata); add_access_condition(request, condition); + add_encryption_key(request, options.encryption_key()); return request; } @@ -505,10 +583,14 @@ namespace azure { namespace storage { namespace protocol { return request; } - 
web::http::http_request copy_blob(const web::http::uri& source, const access_condition& source_condition, const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request copy_blob(const web::http::uri& source, const utility::string_t& tier, const access_condition& source_condition, const cloud_metadata& metadata, const access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); request.headers().add(ms_header_copy_source, source.to_string()); + if (tier != header_value_access_tier_unknown) + { + request.headers().add(ms_header_access_tier, tier); + } add_source_access_condition(request, source_condition); add_access_condition(request, condition); add_metadata(request, metadata); @@ -525,6 +607,35 @@ namespace azure { namespace storage { namespace protocol { return request; } + web::http::http_request incremental_copy_blob(const web::http::uri& source, const access_condition& condition, const cloud_metadata& metadata, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + { + uri_builder.append_query(core::make_query_parameter(uri_query_component, component_incrementalcopy, /* do_encoding */ false)); + web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); + request.headers().add(ms_header_copy_source, source.to_string()); + add_access_condition(request, condition); + add_metadata(request, metadata); + return request; + } + + web::http::http_request set_blob_tier(const utility::string_t& tier, const access_condition& condition, const blob_request_options& options, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + { + 
uri_builder.append_query(core::make_query_parameter(uri_query_component, component_tier, /* do_encoding */ false)); + web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); + + request.headers().add(ms_header_access_tier, tier); + add_access_condition(request, condition); + add_encryption_key(request, options.encryption_key()); + return request; + } + + web::http::http_request get_user_delegation_key(web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + { + uri_builder.append_query(core::make_query_parameter(uri_query_resource_type, resource_service), /* do_encoding */ false); + uri_builder.append_query(core::make_query_parameter(uri_query_component, component_user_delegation_key), /* do encoding */ false); + web::http::http_request request(base_request(web::http::methods::POST, uri_builder, timeout, context)); + return request; + } + void add_lease_id(web::http::http_request& request, const access_condition& condition) { add_optional_header(request.headers(), ms_header_lease_id, condition.lease_id()); diff --git a/Microsoft.WindowsAzure.Storage/src/blob_response_parsers.cpp b/Microsoft.WindowsAzure.Storage/src/blob_response_parsers.cpp index 2c36b2b5..444e51c0 100644 --- a/Microsoft.WindowsAzure.Storage/src/blob_response_parsers.cpp +++ b/Microsoft.WindowsAzure.Storage/src/blob_response_parsers.cpp @@ -16,7 +16,10 @@ // ----------------------------------------------------------------------------------------- #include "stdafx.h" + #include "wascore/protocol.h" +#include "wascore/constants.h" +#include "cpprest/asyncrt_utils.h" namespace azure { namespace storage { namespace protocol { @@ -28,43 +31,87 @@ namespace azure { namespace storage { namespace protocol { properties.m_lease_status = parse_lease_status(response); properties.m_lease_state = parse_lease_state(response); properties.m_lease_duration = parse_lease_duration(response); + properties.m_public_access = 
parse_public_access_type(response); return properties; } - blob_container_public_access_type blob_response_parsers::parse_public_access_type(const web::http::http_response& response) + blob_type blob_response_parsers::parse_blob_type(const utility::string_t& value) { - auto value = get_header_value(response.headers(), ms_header_blob_public_access); - if (value == resource_blob) + if (value == header_value_blob_type_block) + { + return blob_type::block_blob; + } + else if (value == header_value_blob_type_page) { - return blob_container_public_access_type::blob; + return blob_type::page_blob; } - else if (value == resource_container) + else if (value == header_value_blob_type_append) { - return blob_container_public_access_type::container; + return blob_type::append_blob; } else { - return blob_container_public_access_type::off; + return blob_type::unspecified; } } - blob_type blob_response_parsers::parse_blob_type(const utility::string_t& value) + standard_blob_tier blob_response_parsers::parse_standard_blob_tier(const utility::string_t& value) { - if (value == header_value_blob_type_block) + if (value == header_value_access_tier_hot) { - return blob_type::block_blob; + return standard_blob_tier::hot; } - else if (value == header_value_blob_type_page) + else if (value == header_value_access_tier_cool) { - return blob_type::page_blob; + return standard_blob_tier::cool; } - else if (value == header_value_blob_type_append) + else if (value == header_value_access_tier_archive) { - return blob_type::append_blob; + return standard_blob_tier::archive; } else { - return blob_type::unspecified; + return standard_blob_tier::unknown; + } + } + + premium_blob_tier blob_response_parsers::parse_premium_blob_tier(const utility::string_t& value) + { + if (value == header_value_access_tier_p4) + { + return premium_blob_tier::p4; + } + else if (value == header_value_access_tier_p6) + { + return premium_blob_tier::p6; + } + else if (value == header_value_access_tier_p10) + { + return 
premium_blob_tier::p10; + } + else if (value == header_value_access_tier_p20) + { + return premium_blob_tier::p20; + } + else if (value == header_value_access_tier_p30) + { + return premium_blob_tier::p30; + } + else if (value == header_value_access_tier_p40) + { + return premium_blob_tier::p40; + } + else if (value == header_value_access_tier_p50) + { + return premium_blob_tier::p50; + } + else if (value == header_value_access_tier_p60) + { + return premium_blob_tier::p60; + } + else + { + return premium_blob_tier::unknown; } } @@ -77,17 +124,33 @@ namespace azure { namespace storage { namespace protocol { { auto slash = value.find(_XPLATSTR('/')); value = value.substr(slash + 1); - return utility::conversions::scan_string(value); + return utility::conversions::details::scan_string(value); } if (headers.match(ms_header_blob_content_length, value)) { - return utility::conversions::scan_string(value); + return utility::conversions::details::scan_string(value); } return headers.content_length(); } + archive_status blob_response_parsers::parse_archive_status(const utility::string_t& value) + { + if (value == header_value_archive_status_to_hot) + { + return archive_status::rehydrate_pending_to_hot; + } + else if (value == header_value_archive_status_to_cool) + { + return archive_status::rehydrate_pending_to_cool; + } + else + { + return archive_status::unknown; + } + } + cloud_blob_properties blob_response_parsers::parse_blob_properties(const web::http::http_response& response) { cloud_blob_properties properties; @@ -100,17 +163,46 @@ namespace azure { namespace storage { namespace protocol { properties.m_size = parse_blob_size(response); auto& headers = response.headers(); - properties.m_page_blob_sequence_number = utility::conversions::scan_string(get_header_value(headers, ms_header_blob_sequence_number)); - properties.m_append_blob_committed_block_count = utility::conversions::scan_string(get_header_value(headers, ms_header_blob_committed_block_count)); + 
properties.m_page_blob_sequence_number = utility::conversions::details::scan_string(get_header_value(headers, ms_header_blob_sequence_number)); + properties.m_append_blob_committed_block_count = utility::conversions::details::scan_string(get_header_value(headers, ms_header_blob_committed_block_count)); properties.m_cache_control = get_header_value(headers, web::http::header_names::cache_control); properties.m_content_disposition = get_header_value(headers, header_content_disposition); properties.m_content_encoding = get_header_value(headers, web::http::header_names::content_encoding); properties.m_content_language = get_header_value(headers, web::http::header_names::content_language); - properties.m_content_md5 = get_header_value(headers, web::http::header_names::content_md5); properties.m_content_type = get_header_value(headers, web::http::header_names::content_type); properties.m_type = parse_blob_type(get_header_value(headers, ms_header_blob_type)); + // When content_range is not empty, it means the request is Get Blob with range specified, then 'Content-MD5' header should not be used. 
+ properties.m_content_md5 = get_header_value(headers, ms_header_blob_content_md5); + if (properties.m_content_md5.empty() && get_header_value(headers, web::http::header_names::content_range).empty()) + { + properties.m_content_md5 = get_header_value(headers, web::http::header_names::content_md5); + } + + auto change_time_string = get_header_value(headers, ms_header_tier_change_time); + if (!change_time_string.empty()) + { + properties.m_access_tier_change_time = utility::datetime::from_string(change_time_string, utility::datetime::date_format::RFC_1123); + } + + auto tier_string = get_header_value(headers, ms_header_access_tier); + properties.m_standard_blob_tier = parse_standard_blob_tier(tier_string); + properties.m_premium_blob_tier = parse_premium_blob_tier(tier_string); + properties.m_archive_status = parse_archive_status(get_header_value(headers, ms_header_archive_status)); + properties.m_server_encrypted = response_parsers::parse_boolean(get_header_value(headers, ms_header_server_encrypted)); + properties.m_is_incremental_copy = response_parsers::parse_boolean(get_header_value(headers, ms_header_incremental_copy)); + properties.m_access_tier_inferred = response_parsers::parse_boolean(get_header_value(headers, ms_header_access_tier_inferred)); + properties.m_encryption_key_sha256 = get_header_value(headers, ms_header_encryption_key_sha256); + properties.m_version_id = get_header_value(headers, ms_header_version_id); + + return properties; + } + + account_properties blob_response_parsers::parse_account_properties(const web::http::http_response& response) + { + account_properties properties; - properties.m_server_encrypted = (get_header_value(headers, ms_header_server_encrypted) == _XPLATSTR("true")); + properties.m_sku_name = get_header_value(response, protocol::ms_header_sku_name); + properties.m_account_kind = get_header_value(response, protocol::ms_header_account_kind); return properties; } diff --git 
a/Microsoft.WindowsAzure.Storage/src/cloud_append_blob.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_append_blob.cpp index c48320a7..072e0da4 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_append_blob.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_append_blob.cpp @@ -20,9 +20,11 @@ #include "wascore/protocol_xml.h" #include "wascore/blobstreams.h" +#include "cpprest/asyncrt_utils.h" + namespace azure { namespace storage { - pplx::task cloud_append_blob::create_or_replace_async(const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_append_blob::create_or_replace_async_impl(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, std::shared_ptr timer_handler) { assert_no_snapshot(); blob_request_options modified_options(options); @@ -30,8 +32,8 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); - command->set_build_request(std::bind(protocol::put_append_blob, *properties, metadata(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized(), timer_handler); + command->set_build_request(std::bind(protocol::put_append_blob, *properties, metadata(), condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { @@ -42,16 +44,26 @@ namespace azure { namespace storage { return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_append_blob::append_block_async(concurrency::streams::istream block_data, const 
utility::string_t& content_md5, const access_condition& condition, const blob_request_options& options, operation_context context) const + pplx::task cloud_append_blob::append_block_async_impl(concurrency::streams::istream block_data, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_timeout, std::shared_ptr timer_handler) const { assert_no_snapshot(); blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), type()); auto properties = m_properties; - bool needs_md5 = content_md5.empty() && modified_options.use_transactional_md5(); + bool needs_md5 = modified_options.use_transactional_md5() && !content_checksum.is_md5(); + bool needs_crc64 = modified_options.use_transactional_crc64() && !content_checksum.is_crc64(); + checksum_type needs_checksum = checksum_type::none; + if (needs_md5) + { + needs_checksum = checksum_type::md5; + } + else if (needs_crc64) + { + needs_checksum = checksum_type::crc64; + } - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, (modified_options.is_maximum_execution_time_customized() && use_timeout), timer_handler); command->set_authentication_handler(service_client().authentication_handler()); command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context)->int64_t { @@ -60,23 +72,23 @@ namespace azure { namespace storage { auto parsed_properties = protocol::blob_response_parsers::parse_blob_properties(response); properties->update_etag_and_last_modified(parsed_properties); properties->update_append_blob_committed_block_count(parsed_properties); - return utility::conversions::scan_string(protocol::get_header_value(response.headers(), protocol::ms_header_blob_append_offset)); + return 
utility::conversions::details::scan_string(protocol::get_header_value(response.headers(), protocol::ms_header_blob_append_offset)); }); - return core::istream_descriptor::create(block_data, needs_md5).then([command, context, content_md5, modified_options, condition] (core::istream_descriptor request_body) -> pplx::task + return core::istream_descriptor::create(block_data, needs_checksum, std::numeric_limits::max(), protocol::max_append_block_size, command->get_cancellation_token()).then([command, context, content_checksum, modified_options, condition, cancellation_token, options](core::istream_descriptor request_body) -> pplx::task { - const utility::string_t& md5 = content_md5.empty() ? request_body.content_md5() : content_md5; - command->set_build_request(std::bind(protocol::append_block, md5, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + const auto& checksum = content_checksum.empty() ? request_body.content_checksum() : content_checksum; + command->set_build_request(std::bind(protocol::append_block, checksum, condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_request_body(request_body); return core::executor::execute_async(command, modified_options, context); }); } - pplx::task cloud_append_blob::download_text_async(const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_append_blob::download_text_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { auto properties = m_properties; concurrency::streams::container_buffer> buffer; - return download_to_stream_async(buffer.create_ostream(), condition, options, context).then([buffer, properties] () mutable -> utility::string_t + return download_to_stream_async(buffer.create_ostream(), condition, options, context, cancellation_token).then([buffer, 
properties] () mutable -> utility::string_t { if (properties->content_type() != protocol::header_value_content_type_utf8) { @@ -88,7 +100,7 @@ namespace azure { namespace storage { }); } - pplx::task cloud_append_blob::open_write_async(bool create_new, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_append_blob::open_write_async_impl(bool create_new, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout, std::shared_ptr timer_handler) { assert_no_snapshot(); blob_request_options modified_options(options); @@ -97,7 +109,7 @@ namespace azure { namespace storage { pplx::task create_task; if (create_new) { - create_task = create_or_replace_async(condition, modified_options, context); + create_task = create_or_replace_async_impl(condition, modified_options, context, cancellation_token, timer_handler); } else { @@ -110,7 +122,7 @@ namespace azure { namespace storage { } auto instance = std::make_shared(*this); - return create_task.then([instance, condition, modified_options, context]() + return create_task.then([instance, condition, modified_options, context, cancellation_token, use_request_level_timeout, timer_handler]() { auto modified_condition = access_condition::generate_lease_condition(condition.lease_id()); if (condition.max_size() != -1) @@ -123,16 +135,23 @@ namespace azure { namespace storage { modified_condition.set_append_position(condition.append_position()); } - return core::cloud_append_blob_ostreambuf(instance, modified_condition, modified_options, context).create_ostream(); + return core::cloud_append_blob_ostreambuf(instance, modified_condition, modified_options, context, cancellation_token, use_request_level_timeout, timer_handler).create_ostream(); }); } - pplx::task cloud_append_blob::upload_from_stream_async(concurrency::streams::istream source, 
utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_append_blob::upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { - return upload_from_stream_internal_async(source, length, true, condition, options, context); + + if (options.is_maximum_execution_time_customized()) + { + std::shared_ptr timer_handler = std::make_shared(cancellation_token); + timer_handler->start_timer(options.maximum_execution_time()); // azure::storage::core::timer_handler will automatically stop the timer when destructed. + return upload_from_stream_internal_async(source, length, true, condition, options, context, timer_handler->get_cancellation_token(), timer_handler).then([timer_handler/*timer_handler MUST be captured*/]() {}); + } + return upload_from_stream_internal_async(source, length, true, condition, options, context, cancellation_token); } - pplx::task cloud_append_blob::upload_from_stream_internal_async(concurrency::streams::istream source, utility::size64_t length, bool create_new, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_append_blob::upload_from_stream_internal_async(concurrency::streams::istream source, utility::size64_t length, bool create_new, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, std::shared_ptr timer_handler) { assert_no_snapshot(); blob_request_options modified_options(options); @@ -146,7 +165,7 @@ namespace azure { namespace storage { { length = remaining_stream_length; } - + // If the stream is seekable, check for the case where the stream is too short. 
        // If the stream is not seekable, this will be caught later, when we run out of bytes in the stream when uploading.
        if (source.can_seek() && (length > remaining_stream_length))
@@ -154,21 +173,45 @@ namespace azure { namespace storage {
            throw std::invalid_argument(protocol::error_stream_short);
        }
 
-        return open_write_async(create_new, condition, modified_options, context).then([source, length](concurrency::streams::ostream blob_stream) -> pplx::task<void>
+        return open_write_async_impl(create_new, condition, modified_options, context, cancellation_token, false, timer_handler).then([source, length, cancellation_token, timer_handler](concurrency::streams::ostream blob_stream) -> pplx::task<void>
        {
-            return core::stream_copy_async(source, blob_stream, length).then([blob_stream](utility::size64_t) -> pplx::task<void>
+            return core::stream_copy_async(source, blob_stream, length, std::numeric_limits<utility::size64_t>::max(), cancellation_token, timer_handler).then([blob_stream](pplx::task<utility::size64_t> copy_task) -> pplx::task<void>
            {
-                return blob_stream.close();
+                return blob_stream.close().then([copy_task](pplx::task<void> close_task)
+                {
+                    try
+                    {
+                        copy_task.wait();
+                    }
+                    catch (const std::exception&)
+                    {
+                        try
+                        {
+                            close_task.wait();
+                        }
+                        catch (...)
+                        {
+                        }
+                        throw;
+                    }
+                    close_task.wait();
+                });
            });
        });
    }
 
-    pplx::task<void> cloud_append_blob::upload_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context)
+    pplx::task<void> cloud_append_blob::upload_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
    {
        auto instance = std::make_shared<cloud_append_blob>(*this);
-        return concurrency::streams::file_stream<uint8_t>::open_istream(path).then([instance, condition, options, context](concurrency::streams::istream stream) -> pplx::task<void>
+        return concurrency::streams::file_stream<uint8_t>::open_istream(path).then([instance, condition, options, context, cancellation_token](concurrency::streams::istream stream) -> pplx::task<void>
        {
-            return instance->upload_from_stream_async(stream, condition, options, context).then([stream](pplx::task<void> upload_task) -> pplx::task<void>
+            utility::size64_t remaining_stream_length = core::get_remaining_stream_length(stream);
+            if (remaining_stream_length == std::numeric_limits<utility::size64_t>::max())
+            {
+                throw storage_exception(protocol::error_stream_length_unknown);
+            }
+
+            return instance->upload_from_stream_async(stream, std::numeric_limits<utility::size64_t>::max(), condition, options, context, cancellation_token).then([stream](pplx::task<void> upload_task) -> pplx::task<void>
            {
                return stream.close().then([upload_task]()
                {
@@ -178,26 +221,32 @@ namespace azure { namespace storage {
        });
    }
 
-    pplx::task<void> cloud_append_blob::upload_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context)
+    pplx::task<void> cloud_append_blob::upload_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
    {
        auto utf8_body =
utility::conversions::to_utf8string(content);
        auto length = utf8_body.size();
        auto stream = concurrency::streams::bytestream::open_istream(std::move(utf8_body));
        m_properties->set_content_type(protocol::header_value_content_type_utf8);
-        return upload_from_stream_async(stream, length, condition, options, context);
+        return upload_from_stream_async(stream, length, condition, options, context, cancellation_token);
    }
 
-    pplx::task<void> cloud_append_blob::append_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context)
+    pplx::task<void> cloud_append_blob::append_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
    {
-        return upload_from_stream_internal_async(source, length, false, condition, options, context);
+        if (options.is_maximum_execution_time_customized())
+        {
+            std::shared_ptr<core::timer_handler> timer_handler = std::make_shared<core::timer_handler>(cancellation_token);
+            timer_handler->start_timer(options.maximum_execution_time()); // azure::storage::core::timer_handler will automatically stop the timer when destructed.
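The `copy_task`/`close_task` continuation chain introduced in the upload path above enforces a close-even-on-failure rule: the blob stream is always closed, and a secondary failure from `close()` must not mask the original copy failure. That ordering can be sketched with plain callables (names hypothetical, independent of pplx):

```cpp
#include <functional>
#include <stdexcept>

// Sketch of the close-ordering rule: after a stream copy, the destination is
// closed whether or not the copy failed; if the copy failed, an error thrown
// by close() is swallowed so the original copy error is the one propagated.
void copy_then_close(const std::function<void()>& copy,
                     const std::function<void()>& close)
{
    try
    {
        copy();
    }
    catch (...)
    {
        try
        {
            close(); // best-effort close; must not mask the copy error
        }
        catch (...)
        {
        }
        throw; // rethrow the original copy failure
    }
    close(); // copy succeeded: a close failure is now a real error
}
```

This is why the diff waits on `copy_task` first, wraps the inner `close_task.wait()` in its own try/catch, and only calls `close_task.wait()` unguarded on the success path.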
+            return upload_from_stream_internal_async(source, length, false, condition, options, context, timer_handler->get_cancellation_token()).then([timer_handler/*timer_handler MUST be captured*/]() {});
+        }
+        return upload_from_stream_internal_async(source, length, false, condition, options, context, cancellation_token);
    }
 
-    pplx::task<void> cloud_append_blob::append_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context)
+    pplx::task<void> cloud_append_blob::append_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
    {
        auto instance = std::make_shared<cloud_append_blob>(*this);
-        return concurrency::streams::file_stream<uint8_t>::open_istream(path).then([instance, condition, options, context](concurrency::streams::istream stream) -> pplx::task<void>
+        return concurrency::streams::file_stream<uint8_t>::open_istream(path).then([instance, condition, options, context, cancellation_token](concurrency::streams::istream stream) -> pplx::task<void>
        {
-            return instance->append_from_stream_async(stream, condition, options, context).then([stream](pplx::task<void> upload_task) -> pplx::task<void>
+            return instance->append_from_stream_async(stream, std::numeric_limits<utility::size64_t>::max(), condition, options, context, cancellation_token).then([stream](pplx::task<void> upload_task) -> pplx::task<void>
            {
                return stream.close().then([upload_task]()
                {
@@ -207,12 +256,12 @@ namespace azure { namespace storage {
        });
    }
 
-    pplx::task<void> cloud_append_blob::append_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context)
+    pplx::task<void> cloud_append_blob::append_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
    {
        auto utf8_body =
utility::conversions::to_utf8string(content); auto length = utf8_body.size(); auto stream = concurrency::streams::bytestream::open_istream(std::move(utf8_body)); m_properties->set_content_type(protocol::header_value_content_type_utf8); - return append_from_stream_async(stream, length, condition, options, context); + return append_from_stream_async(stream, length, condition, options, context, cancellation_token); } }} // namespace azure::storage diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_blob.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_blob.cpp index 71aadb7b..5798e8f8 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_blob.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_blob.cpp @@ -108,6 +108,39 @@ namespace azure { namespace storage { } } + utility::string_t cloud_blob::get_premium_access_tier_string(const premium_blob_tier tier) + { + switch (tier) + { + case premium_blob_tier::p4: + return protocol::header_value_access_tier_p4; + + case premium_blob_tier::p6: + return protocol::header_value_access_tier_p6; + + case premium_blob_tier::p10: + return protocol::header_value_access_tier_p10; + + case premium_blob_tier::p20: + return protocol::header_value_access_tier_p20; + + case premium_blob_tier::p30: + return protocol::header_value_access_tier_p30; + + case premium_blob_tier::p40: + return protocol::header_value_access_tier_p40; + + case premium_blob_tier::p50: + return protocol::header_value_access_tier_p50; + + case premium_blob_tier::p60: + return protocol::header_value_access_tier_p60; + + default: + return protocol::header_value_access_tier_unknown; + } + } + utility::string_t cloud_blob::get_shared_access_signature(const blob_shared_access_policy& policy, const utility::string_t& stored_policy_identifier, const cloud_blob_shared_access_headers& headers) const { if (!service_client().credentials().is_shared_key()) @@ -127,24 +160,34 @@ namespace azure { namespace storage { resource_str.append(_XPLATSTR("/")); resource_str.append(name()); 
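The SAS code above assembles the canonical resource string in the form `/blob/<account>/<container>/<blob>`, and the later hunks substitute the resource designator `bs` for `b` when the object is a snapshot. Just the path-building step can be sketched standalone (helper name hypothetical; the real code uses `utility::string_t` and `protocol::service_blob`):

```cpp
#include <string>

// Sketch of the canonical resource path used for a blob SAS or a
// user-delegation SAS: "/blob/<account>/<container>/<blob>". The "blob"
// segment mirrors protocol::service_blob; the rest come from the service
// client, container, and blob names.
std::string make_blob_canonical_resource(const std::string& account,
                                         const std::string& container,
                                         const std::string& blob)
{
    return "/blob/" + account + "/" + container + "/" + blob;
}
```

The signature is then computed over this canonicalized resource, which is why every segment must match the URI the request is issued against.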
- return protocol::get_blob_sas_token(stored_policy_identifier, policy, headers, _XPLATSTR("b"), resource_str, service_client().credentials()); + return protocol::get_blob_sas_token(stored_policy_identifier, policy, headers, is_snapshot() ? _XPLATSTR("bs") : _XPLATSTR("b"), resource_str, snapshot_time(), service_client().credentials()); + } + + utility::string_t cloud_blob::get_user_delegation_sas(const user_delegation_key& key, const blob_shared_access_policy& policy, const cloud_blob_shared_access_headers& headers) const + { + utility::string_t resource_str = + _XPLATSTR("/") + utility::string_t(protocol::service_blob) + + _XPLATSTR("/") + service_client().credentials().account_name() + + _XPLATSTR("/") + container().name() + + _XPLATSTR("/") + name(); + return protocol::get_blob_user_delegation_sas_token(policy, headers, is_snapshot() ? _XPLATSTR("bs") : _XPLATSTR("b"), resource_str, snapshot_time(), key); } - pplx::task cloud_blob::open_read_async(const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_blob::open_read_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), type(), false); auto instance = std::make_shared(*this); - return instance->download_attributes_async(condition, modified_options, context).then([instance, condition, modified_options, context] () -> concurrency::streams::istream + return instance->download_attributes_async_impl(condition, modified_options, context, cancellation_token, true).then([instance, condition, modified_options, context, cancellation_token] () -> concurrency::streams::istream { auto modified_condition = azure::storage::access_condition::generate_if_match_condition(instance->properties().etag()); 
modified_condition.set_lease_id(condition.lease_id()); - return core::cloud_blob_istreambuf(instance, modified_condition, modified_options, context).create_istream(); + return core::cloud_blob_istreambuf(instance, modified_condition, modified_options, context, cancellation_token, true).create_istream(); }); } - pplx::task cloud_blob::download_attributes_async(const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_blob::download_attributes_async_impl(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_timer, std::shared_ptr timer_handler) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), type()); @@ -153,21 +196,22 @@ namespace azure { namespace storage { auto metadata = m_metadata; auto copy_state = m_copy_state; - auto command = std::make_shared>(uri()); - command->set_build_request(std::bind(protocol::get_blob_properties, snapshot_time(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized() && use_timer, timer_handler); + command->set_build_request(std::bind(protocol::get_blob_properties, snapshot_time(), condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); command->set_location_mode(core::command_location_mode::primary_or_secondary); - command->set_preprocess_response([properties, metadata, copy_state] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties, metadata, copy_state](const web::http::http_response& response, const request_result& result, 
operation_context context) { protocol::preprocess_response_void(response, result, context); - properties->update_all(protocol::blob_response_parsers::parse_blob_properties(response), false); + properties->update_all(protocol::blob_response_parsers::parse_blob_properties(response)); *metadata = protocol::parse_metadata(response); *copy_state = protocol::response_parsers::parse_copy_state(response); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob::upload_metadata_async(const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_blob::upload_metadata_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { assert_no_snapshot(); blob_request_options modified_options(options); @@ -175,18 +219,19 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); - command->set_build_request(std::bind(protocol::set_blob_metadata, metadata(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); + command->set_build_request(std::bind(protocol::set_blob_metadata, metadata(), condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); 
properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response)); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob::upload_properties_async(const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_blob::upload_properties_async_impl(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_timeout, std::shared_ptr timer_handler) { assert_no_snapshot(); blob_request_options modified_options(options); @@ -194,26 +239,35 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, (modified_options.is_maximum_execution_time_customized() && use_timeout), timer_handler); command->set_build_request(std::bind(protocol::set_blob_properties, *properties, metadata(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); - + auto parsed_properties = protocol::blob_response_parsers::parse_blob_properties(response); properties->update_etag_and_last_modified(parsed_properties); properties->update_page_blob_sequence_number(parsed_properties); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob::delete_blob_async(delete_snapshots_option snapshots_option, const access_condition& condition, const 
blob_request_options& options, operation_context context) + pplx::task cloud_blob::download_account_properties_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const + { + blob_request_options modified_options(options); + modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); + + return service_client().download_account_properties_base_async(uri(), modified_options, context, cancellation_token); + } + + pplx::task cloud_blob::delete_blob_async(delete_snapshots_option snapshots_option, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::delete_blob, snapshots_option, snapshot_time(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); @@ -223,20 +277,32 @@ namespace azure { namespace storage { protocol::preprocess_response_void(response, result, context); properties->initialization(); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob::delete_blob_if_exists_async(delete_snapshots_option snapshots_option, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_blob::delete_blob_if_exists_async(delete_snapshots_option snapshots_option, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& 
cancellation_token) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); + std::chrono::steady_clock::time_point first_time_point; + + if (options.is_maximum_execution_time_customized()) + { + first_time_point = std::chrono::steady_clock::now(); + } auto instance = std::make_shared(*this); - return exists_async(true, modified_options, context).then([instance, snapshots_option, condition, modified_options, context] (bool exists_result) -> pplx::task + return exists_async_impl(true, modified_options, context, cancellation_token).then([instance, condition, modified_options, snapshots_option, context, cancellation_token, options, first_time_point](bool exists_result) mutable ->pplx::task { if (exists_result) { - return instance->delete_blob_async(snapshots_option, condition, modified_options, context).then([] (pplx::task delete_task) -> bool + if (modified_options.is_maximum_execution_time_customized()) + { + auto new_max_execution_time = modified_options.maximum_execution_time() - std::chrono::duration_cast(std::chrono::steady_clock::now() - first_time_point); + modified_options.set_maximum_execution_time(new_max_execution_time); + } + return instance->delete_blob_async(snapshots_option, condition, modified_options, context, cancellation_token).then([](pplx::task delete_task) -> bool { try { @@ -266,7 +332,7 @@ namespace azure { namespace storage { }); } - pplx::task cloud_blob::acquire_lease_async(const lease_time& duration, const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context) const + pplx::task cloud_blob::acquire_lease_async(const lease_time& duration, const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { assert_no_snapshot(); 
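In `delete_blob_if_exists_async` above, the time spent in the preceding `exists_async` call is charged against the caller's `maximum_execution_time` before the delete is issued, so the composite operation honors a single overall budget. The bookkeeping can be sketched as (helper name hypothetical):

```cpp
#include <chrono>

// Sketch of the budget arithmetic in delete_blob_if_exists_async: the
// remaining execution time is the original budget minus the time already
// spent on the existence check. A non-positive result means the budget is
// exhausted before the second call starts.
std::chrono::milliseconds remaining_execution_time(
    std::chrono::milliseconds total_budget,
    std::chrono::steady_clock::time_point start,
    std::chrono::steady_clock::time_point now)
{
    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(now - start);
    return total_budget - elapsed; // may go negative: budget already spent
}
```

Note the diff records `first_time_point` with `std::chrono::steady_clock`, the monotonic clock, so the subtraction is immune to wall-clock adjustments.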
blob_request_options modified_options(options); @@ -274,19 +340,20 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::lease_blob, protocol::header_value_lease_acquire, proposed_lease_id, duration, lease_break_period(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response)); return protocol::parse_lease_id(response); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob::renew_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context) const + pplx::task cloud_blob::renew_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { if (condition.lease_id().empty()) { @@ -299,18 +366,19 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::lease_blob, protocol::header_value_lease_renew, 
utility::string_t(), lease_time(), lease_break_period(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response)); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob::change_lease_async(const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context) const + pplx::task cloud_blob::change_lease_async(const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { if (condition.lease_id().empty()) { @@ -323,19 +391,20 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::lease_blob, protocol::header_value_lease_change, proposed_lease_id, lease_time(), lease_break_period(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t + 
command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response)); return protocol::parse_lease_id(response); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob::release_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context) const + pplx::task cloud_blob::release_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { if (condition.lease_id().empty()) { @@ -348,18 +417,19 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::lease_blob, protocol::header_value_lease_release, utility::string_t(), lease_time(), lease_break_period(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response)); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task 
cloud_blob::break_lease_async(const lease_break_period& break_period, const access_condition& condition, const blob_request_options& options, operation_context context) const + pplx::task cloud_blob::break_lease_async(const lease_break_period& break_period, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { assert_no_snapshot(); blob_request_options modified_options(options); @@ -367,15 +437,16 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::lease_blob, protocol::header_value_lease_break, utility::string_t(), lease_time(), break_period, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) -> std::chrono::seconds + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> std::chrono::seconds { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response)); return protocol::parse_lease_time(response); }); + return core::executor::execute_async(command, modified_options, context); } @@ -385,12 +456,13 @@ namespace azure { namespace storage { utility::size64_t m_total_written_to_destination_stream; utility::size64_t m_response_length; utility::string_t m_response_md5; + utility::string_t m_response_crc64; utility::string_t m_locked_etag; bool m_reset_target; 
concurrency::streams::ostream::pos_type m_target_offset; }; - pplx::task cloud_blob::download_single_range_to_stream_async(concurrency::streams::ostream target, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, bool update_properties) + pplx::task cloud_blob::download_single_range_to_stream_async(concurrency::streams::ostream target, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, bool update_properties, const pplx::cancellation_token& cancellation_token, std::shared_ptr timer_handler) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); @@ -407,7 +479,7 @@ namespace azure { namespace storage { download_info->m_reset_target = false; download_info->m_target_offset = target.can_seek() ? target.tell() : static_cast::pos_type>(0); - std::shared_ptr> command = std::make_shared>(uri()); + std::shared_ptr> command = std::make_shared>(uri(), cancellation_token, false, timer_handler); std::weak_ptr> weak_command(command); command->set_build_request([offset, length, modified_options, condition, current_snapshot_time, download_info](web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) -> web::http::http_request { @@ -449,12 +521,31 @@ namespace azure { namespace storage { current_condition = condition; } - return protocol::get_blob(current_offset, current_length, modified_options.use_transactional_md5() && !download_info->m_are_properties_populated, current_snapshot_time, current_condition, uri_builder, timeout, context); + checksum_type needs_checksum = checksum_type::none; + if (modified_options.use_transactional_md5() && !download_info->m_are_properties_populated) + { + needs_checksum = checksum_type::md5; + } + else if 
(modified_options.use_transactional_crc64())
+                {
+                    needs_checksum = checksum_type::crc64;
+                }
+
+                return protocol::get_blob(current_offset, current_length, needs_checksum, current_snapshot_time, current_condition, modified_options, uri_builder, timeout, context);
             });
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_location_mode(core::command_location_mode::primary_or_secondary);
             command->set_destination_stream(target);
-            command->set_calculate_response_body_md5(!modified_options.disable_content_md5_validation());
+            checksum_type calculate_checksum_type = modified_options.disable_content_md5_validation() ? checksum_type::none : checksum_type::md5;
+            if (modified_options.use_transactional_md5())
+            {
+                calculate_checksum_type = modified_options.disable_content_md5_validation() ? checksum_type::none : checksum_type::md5;
+            }
+            else if (modified_options.use_transactional_crc64())
+            {
+                calculate_checksum_type = modified_options.disable_content_crc64_validation() ? checksum_type::none : checksum_type::crc64;
+            }
+            command->set_calculate_response_body_checksum(calculate_checksum_type);
             command->set_recover_request([target, download_info](utility::size64_t total_written_to_destination_stream, operation_context context) -> bool
             {
                 if (download_info->m_reset_target)
@@ -478,7 +569,7 @@ namespace azure { namespace storage {
                     download_info->m_total_written_to_destination_stream = total_written_to_destination_stream;
                 }
-                return true;
+                return target.is_open();
             });
             command->set_preprocess_response([weak_command, offset, modified_options, properties, metadata, copy_state, download_info, update_properties](const web::http::http_response& response, const request_result& result, operation_context context)
             {
@@ -505,17 +596,22 @@ namespace azure { namespace storage {
                 {
                     if (update_properties == true)
                     {
-                        properties->update_all(protocol::blob_response_parsers::parse_blob_properties(response), offset != std::numeric_limits<utility::size64_t>::max());
+                        properties->update_all(protocol::blob_response_parsers::parse_blob_properties(response));
                         *metadata = protocol::parse_metadata(response);
                         *copy_state = protocol::response_parsers::parse_copy_state(response);
                     }
                     download_info->m_response_length = result.content_length();
                     download_info->m_response_md5 = result.content_md5();
+                    download_info->m_response_crc64 = result.content_crc64();
                     if (modified_options.use_transactional_md5() && !modified_options.disable_content_md5_validation() && download_info->m_response_md5.empty())
                     {
-                        throw storage_exception(protocol::error_missing_md5);
+                        throw storage_exception(protocol::error_missing_md5, false);
+                    }
+                    if (!modified_options.use_transactional_md5() && modified_options.use_transactional_crc64() && !modified_options.disable_content_crc64_validation() && download_info->m_response_crc64.empty())
+                    {
+                        throw storage_exception(protocol::error_missing_crc64, false);
                     }
                     // Lock to the current storage location when resuming a failed download. This is locked
@@ -536,33 +632,64 @@ namespace azure { namespace storage {
                 command->set_location_mode(core::command_location_mode::primary_or_secondary);
-                    if (!download_info->m_response_md5.empty() && !descriptor.content_md5().empty() && download_info->m_response_md5 != descriptor.content_md5())
+                    if (!download_info->m_response_md5.empty() && descriptor.content_checksum().is_md5() && download_info->m_response_md5 != descriptor.content_checksum().md5())
                     {
                         throw storage_exception(protocol::error_md5_mismatch);
                     }
+                    if (!download_info->m_response_crc64.empty() && !descriptor.content_checksum().is_crc64() && download_info->m_response_crc64 != descriptor.content_checksum().crc64())
+                    {
+                        throw storage_exception(protocol::error_crc64_mismatch);
+                    }
                     return pplx::task_from_result();
                 });
 
                 return core::executor<void>::execute_async(command, modified_options, context);
             }
 
-        pplx::task<void> cloud_blob::download_range_to_stream_async(concurrency::streams::ostream target, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context)
+        pplx::task<void> cloud_blob::download_range_to_stream_async(concurrency::streams::ostream target, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
         {
-            if (options.parallelism_factor() > 1)
+            std::shared_ptr<core::timer_handler> timer_handler = std::make_shared<core::timer_handler>(cancellation_token);
+
+            if (options.is_maximum_execution_time_customized())
+            {
+                timer_handler->start_timer(options.maximum_execution_time());// azure::storage::core::timer_handler will automatically stop the timer when destructed.
+            }
+
+            if (options.parallelism_factor() > 1 || options.use_transactional_crc64())
             {
                 auto instance = std::make_shared<cloud_blob>(*this);
                 // if download a whole blob, enable download strategy(download 32MB first).
                utility::size64_t single_blob_download_threshold(protocol::default_single_blob_download_threshold);
-                // If tranactional md5 validation is set, first range should be 4MB.
-                if (options.use_transactional_md5())
+                // If transactional md5 validation is set, first range should be 4MB.
+                if (options.use_transactional_md5() || options.use_transactional_crc64())
                 {
                     single_blob_download_threshold = protocol::default_single_block_download_threshold;
                 }
 
+                // We use this variable to track if we are trying to download the whole blob or just a range.
+                // In the former case, no exception should be thrown if the blob is empty.
+                // In the latter case, exception should be thrown.
+                // And the behavior should be consistent no matter we're downloading with multiple or single thread.
+                bool no_throw_on_empty = false;
+
+                if (offset >= std::numeric_limits<utility::size64_t>::max())
+                {
+                    no_throw_on_empty = true;
+                    if (length == 0)
+                    {
+                        offset = 0;
+                        length = std::numeric_limits<utility::size64_t>::max();
+                    }
+                    else
+                    {
+                        throw std::invalid_argument("length");
+                    }
+                }
+
                 // download first range.
                 // if 416 thrown, it's an empty blob. need to download attributes.
                 // otherwise, properties must be updated for further parallel download.
-                return instance->download_single_range_to_stream_async(target, 0, single_blob_download_threshold, condition, options, context, true).then([=](pplx::task<void> download_task)
+                return instance->download_single_range_to_stream_async(target, offset, length < single_blob_download_threshold ? length : single_blob_download_threshold, condition, options, context, true, timer_handler->get_cancellation_token(), timer_handler).then([=](pplx::task<void> download_task)
                 {
                     try
                     {
@@ -570,11 +697,11 @@ namespace azure { namespace storage {
                     }
                     catch (storage_exception &e)
                     {
-                        // For empty blob, swallow the exception and update the attributes.
-                        if (e.result().http_status_code() == web::http::status_codes::RangeNotSatisfiable
-                            && offset >= std::numeric_limits<utility::size64_t>::max())
+                        // If offset equals to 0 and HTTP status code is 416 RangeNotSatisfiable, then this is an empty blob.
+                        // For empty blob, update the attributes or throw an exception.
+                        if (e.result().http_status_code() == web::http::status_codes::RangeNotSatisfiable && offset == 0 && no_throw_on_empty)
                         {
-                            return instance->download_attributes_async(condition, options, context);
+                            return instance->download_attributes_async_impl(condition, options, context, timer_handler->get_cancellation_token(), false, timer_handler);
                         }
                         else
                         {
@@ -582,26 +709,22 @@ namespace azure { namespace storage {
                         }
                     }
 
-                    if ((offset >= std::numeric_limits<utility::size64_t>::max() && instance->properties().size() <= single_blob_download_threshold)
-                        || (offset < std::numeric_limits<utility::size64_t>::max() && length <= single_blob_download_threshold))
-                    {
-                        return pplx::task_from_result();
-                    }
-
                     // download the rest data in parallel.
-                    utility::size64_t target_offset;
-                    utility::size64_t target_length;
-
-                    if (offset >= std::numeric_limits<utility::size64_t>::max())
+                    utility::size64_t target_offset = offset;
+                    utility::size64_t target_length = length;
+                    if (target_length >= std::numeric_limits<utility::size64_t>::max()
+                        || target_length > instance->properties().size() - offset)
                     {
-                        target_offset = single_blob_download_threshold;
-                        target_length = instance->properties().size() - single_blob_download_threshold;
+                        target_length = instance->properties().size() - offset;
                     }
-                    else
+
+                    // Download completes in first range download.
+                    if (target_length <= single_blob_download_threshold)
                     {
-                        target_offset = offset + single_blob_download_threshold;
-                        target_length = length - single_blob_download_threshold;
+                        return pplx::task_from_result();
                     }
+                    target_offset += single_blob_download_threshold;
+                    target_length -= single_blob_download_threshold;
 
                     access_condition modified_condition(condition);
                     if (condition.if_match_etag().empty())
@@ -609,7 +732,7 @@ namespace azure { namespace storage {
                         modified_condition.set_if_match_etag(instance->properties().etag());
                     }
 
-                    return pplx::task_from_result().then([instance, target, target_offset, target_length, single_blob_download_threshold, modified_condition, options, context]()
+                    return pplx::task_from_result().then([instance, offset, target, target_offset, target_length, single_blob_download_threshold, modified_condition, options, context, timer_handler]()
                     {
                         auto semaphore = std::make_shared<core::async_semaphore>(options.parallelism_factor());
                         // lock to the target ostream
@@ -620,50 +743,67 @@ namespace azure { namespace storage {
                         auto smallest_offset = std::make_shared<utility::size64_t>(target_offset);
                         auto condition_variable = std::make_shared<std::condition_variable>();
-                        std::mutex condition_variable_mutex;
-                        for (utility::size64_t current_offset = target_offset; current_offset < target_offset + target_length; current_offset += protocol::single_block_size)
+                        std::mutex condition_variable_mutex;
+                        std::vector<pplx::task<void>> parallel_tasks;
+                        for (utility::size64_t current_offset = target_offset; current_offset < target_offset + target_length; current_offset += protocol::transactional_md5_block_size)
                         {
-                            utility::size64_t current_length = protocol::single_block_size;
+                            utility::size64_t current_length = protocol::transactional_md5_block_size;
                             if (current_offset + current_length > target_offset + target_length)
                             {
                                 current_length = target_offset + target_length - current_offset;
                             }
-                            semaphore->lock_async().then([instance, &mutex, semaphore, condition_variable, &condition_variable_mutex, &writer, target, smallest_offset, current_offset, current_length, modified_condition, options, context]()
+                            auto parallel_task = semaphore->lock_async().then([instance, &mutex, semaphore, condition_variable, &condition_variable_mutex, &writer, offset, target, smallest_offset, current_offset, current_length, modified_condition, options, context, timer_handler]()
                             {
+                                auto sem_unlocker = std::make_shared<std::unique_lock<core::async_semaphore>>(*semaphore, std::adopt_lock);
+
                                 concurrency::streams::container_buffer<std::vector<uint8_t>> buffer;
                                 auto segment_ostream = buffer.create_ostream();
-                                // if trasaction MD5 is enabled, it will be checked inside each download_single_range_to_stream_async.
-                                instance->download_single_range_to_stream_async(segment_ostream, current_offset, current_length, modified_condition, options, context).then([buffer, segment_ostream, semaphore, condition_variable, &condition_variable_mutex, smallest_offset, current_offset, current_length, &mutex, target, &writer, options](pplx::task<void> download_task)
+                                // if transaction MD5 is enabled, it will be checked inside each download_single_range_to_stream_async.
+                                return instance->download_single_range_to_stream_async(segment_ostream, current_offset, current_length, modified_condition, options, context, false, timer_handler->get_cancellation_token(), timer_handler)
+                                    .then([buffer, segment_ostream, semaphore, sem_unlocker, condition_variable, &condition_variable_mutex, smallest_offset, offset, current_offset, current_length, &mutex, target, &writer, options](pplx::task<void> download_task)
                                 {
                                     segment_ostream.close().then([download_task](pplx::task<void> close_task)
                                     {
-                                        download_task.wait();
+                                        try
+                                        {
+                                            download_task.wait();
+                                        }
+                                        catch (const std::exception&)
+                                        {
+                                            try
+                                            {
+                                                close_task.wait();
+                                            }
+                                            catch (...)
+                                            {
+                                            }
+                                            throw;
+                                        }
                                         close_task.wait();
                                     }).wait();
-                                    // status of current semaphore.
-                                    bool released = false;
                                     // target stream is seekable, could write to target stream once the download finished.
                                     if (target.can_seek())
                                     {
                                         pplx::extensibility::scoped_rw_lock_t guard(mutex);
-                                        target.streambuf().seekpos(current_offset, std::ios_base::out);
+                                        target.streambuf().seekpos(current_offset - offset, std::ios_base::out);
                                         target.streambuf().putn_nocopy(buffer.collection().data(), buffer.collection().size()).wait();
-                                        *smallest_offset += protocol::single_block_size;
-                                        released = true;
-                                        semaphore->unlock();
+                                        *smallest_offset += protocol::transactional_md5_block_size;
                                     }
                                     else
                                     {
+                                        // status of current semaphore.
+                                        bool released = false;
                                         {
                                             pplx::extensibility::scoped_rw_lock_t guard(mutex);
                                             if (*smallest_offset == current_offset)
                                             {
+                                                // Below is the IO operation that may block for a relatively long time. However, this operation does not provide a interface to interrupt, so no cancellation support.
                                                 target.streambuf().putn_nocopy(buffer.collection().data(), buffer.collection().size()).wait();
-                                                *smallest_offset += protocol::single_block_size;
+                                                *smallest_offset += protocol::transactional_md5_block_size;
                                                 condition_variable->notify_all();
                                                 released = true;
-                                                semaphore->unlock();
+                                                sem_unlocker->unlock();
                                             }
                                         }
                                         if (!released)
@@ -672,7 +812,7 @@ namespace azure { namespace storage {
                                             if (writer < options.parallelism_factor())
                                             {
                                                 released = true;
-                                                semaphore->unlock();
+                                                sem_unlocker->unlock();
                                             }
                                             std::unique_lock<std::mutex> locker(condition_variable_mutex);
                                             condition_variable->wait(locker, [smallest_offset, current_offset, &mutex]()
@@ -686,7 +826,7 @@ namespace azure { namespace storage {
                                             if (*smallest_offset == current_offset)
                                             {
                                                 target.streambuf().putn_nocopy(buffer.collection().data(), buffer.collection().size()).wait();
-                                                *smallest_offset += protocol::single_block_size;
+                                                *smallest_offset += protocol::transactional_md5_block_size;
                                             }
                                             else if (*smallest_offset > current_offset)
                                             {
@@ -695,16 +835,51 @@ namespace azure { namespace storage {
                                             }
                                             condition_variable->notify_all();
                                             pplx::details::atomic_decrement(writer);
-                                            if (!released)
-                                            {
-                                                semaphore->unlock();
-                                            }
                                         }
                                     }
                                 });
                             });
+                            parallel_tasks.emplace_back(std::move(parallel_task));
                         }
+
+                        // If the cancellation token is canceled, the lock will be in lock status when the exception is thrown, so need to unlock it in case it blocks other async processes
+                        try
+                        {
+                            semaphore->wait_all_async().get();
+                        }
+                        catch (const storage_exception& ex)
+                        {
+                            if (std::string(ex.what()) == protocol::error_operation_canceled)
+                            {
+                                semaphore->unlock();
+                            }
+                            throw ex;
                         }
-                        semaphore->wait_all_async().wait();
+
+                        pplx::when_all(parallel_tasks.begin(), parallel_tasks.end()).then([parallel_tasks](pplx::task<void> wait_all_task)
+                        {
+                            try
+                            {
+                                wait_all_task.wait();
+                            }
+                            catch (const std::exception&)
+                            {
+                                std::for_each(parallel_tasks.begin(), parallel_tasks.end(), [](pplx::task<void> task)
+                                {
+                                    task.then([](pplx::task<void> t)
+                                    {
+                                        try
+                                        {
+                                            t.wait();
+                                        }
+                                        catch (...)
+                                        {
+                                        }
+                                    });
+                                });
+                                throw;
+                            }
+                        }).wait();
+
                         std::unique_lock<std::mutex> locker(condition_variable_mutex);
                         condition_variable->wait(locker, [smallest_offset, &mutex, target_offset, target_length]()
                         {
@@ -712,20 +887,20 @@ namespace azure { namespace storage {
                             return *smallest_offset >= target_offset + target_length;
                         });
                     });
-                });
+                }).then([timer_handler/*timer_handler MUST be captured*/]() {});
             }
             else
            {
-                return download_single_range_to_stream_async(target, offset, length, condition, options, context, true);
+                return download_single_range_to_stream_async(target, offset, length, condition, options, context, true, timer_handler->get_cancellation_token(), timer_handler).then([timer_handler/*timer_handler MUST be captured*/]() {});;
             }
         }
 
-        pplx::task<void> cloud_blob::download_to_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context)
+        pplx::task<void> cloud_blob::download_to_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
         {
             auto instance =
std::make_shared<cloud_blob>(*this);
-            return concurrency::streams::file_stream::open_ostream(path).then([instance, condition, options, context] (concurrency::streams::ostream stream) -> pplx::task<void>
+            return concurrency::streams::file_stream::open_ostream(path).then([instance, condition, options, context, cancellation_token] (concurrency::streams::ostream stream) -> pplx::task<void>
             {
-                return instance->download_to_stream_async(stream, condition, options, context).then([stream] (pplx::task<void> upload_task) -> pplx::task<void>
+                return instance->download_to_stream_async(stream, condition, options, context, cancellation_token).then([stream] (pplx::task<void> upload_task) -> pplx::task<void>
                 {
                     return stream.close().then([upload_task]()
                     {
@@ -735,7 +910,7 @@ namespace azure { namespace storage {
             });
         }
 
-        pplx::task<bool> cloud_blob::exists_async(bool primary_only, const blob_request_options& options, operation_context context)
+        pplx::task<bool> cloud_blob::exists_async_impl(bool primary_only, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
         {
             blob_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified);
@@ -744,11 +919,11 @@ namespace azure { namespace storage {
             auto metadata = m_metadata;
             auto copy_state = m_copy_state;
 
-            auto command = std::make_shared<core::storage_command<bool>>(uri());
-            command->set_build_request(std::bind(protocol::get_blob_properties, snapshot_time(), access_condition(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            auto command = std::make_shared<core::storage_command<bool>>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
+            command->set_build_request(std::bind(protocol::get_blob_properties, snapshot_time(), access_condition(), modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_location_mode(primary_only ? core::command_location_mode::primary_only : core::command_location_mode::primary_or_secondary);
-            command->set_preprocess_response([properties, metadata, copy_state] (const web::http::http_response& response, const request_result& result, operation_context context) -> bool
+            command->set_preprocess_response([properties, metadata, copy_state](const web::http::http_response& response, const request_result& result, operation_context context) -> bool
             {
                 if (response.status_code() == web::http::status_codes::NotFound)
                 {
@@ -756,7 +931,7 @@ namespace azure { namespace storage {
                 }
 
                 protocol::preprocess_response_void(response, result, context);
-                properties->update_all(protocol::blob_response_parsers::parse_blob_properties(response), false);
+                properties->update_all(protocol::blob_response_parsers::parse_blob_properties(response));
                 *metadata = protocol::parse_metadata(response);
                 *copy_state = protocol::response_parsers::parse_copy_state(response);
                 return true;
@@ -764,7 +939,7 @@ namespace azure { namespace storage {
             return core::executor<bool>::execute_async(command, modified_options, context);
         }
 
-        pplx::task<utility::string_t> cloud_blob::start_copy_async(const web::http::uri& source, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context)
+        pplx::task<utility::string_t> cloud_blob::start_copy_async_impl(const web::http::uri& source, const premium_blob_tier tier, const cloud_metadata& metadata, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
         {
             assert_no_snapshot();
             blob_request_options modified_options(options);
@@ -773,26 +948,27 @@ namespace azure { namespace storage {
             auto properties = m_properties;
             auto copy_state = m_copy_state;
 
-            auto command = std::make_shared<core::storage_command<utility::string_t>>(uri());
-            command->set_build_request(std::bind(protocol::copy_blob, source, source_condition, metadata(), destination_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            auto command = std::make_shared<core::storage_command<utility::string_t>>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
+            command->set_build_request(std::bind(protocol::copy_blob, source, get_premium_access_tier_string(tier), source_condition, metadata, destination_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
-            command->set_preprocess_response([properties, copy_state] (const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t
+            command->set_preprocess_response([properties, copy_state, tier](const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t
             {
                 protocol::preprocess_response_void(response, result, context);
                 properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response));
                 auto new_state = protocol::response_parsers::parse_copy_state(response);
+                properties->m_premium_blob_tier = tier;
                 *copy_state = new_state;
                 return new_state.copy_id();
             });
             return core::executor<utility::string_t>::execute_async(command, modified_options, context);
         }
 
-        pplx::task<utility::string_t> cloud_blob::start_copy_async(const cloud_blob& source, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context)
+        pplx::task<utility::string_t> cloud_blob::start_copy_async(const cloud_blob& source, const cloud_metadata& metadata, const access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
         {
             web::http::uri raw_source_uri = source.snapshot_qualified_uri().primary_uri();
             web::http::uri source_uri =
source.service_client().credentials().transform_uri(raw_source_uri);
-            return start_copy_async(source_uri, source_condition, destination_condition, options, context);
+            return start_copy_async(source_uri, metadata, source_condition, destination_condition, options, context, cancellation_token);
         }
 
         pplx::task<utility::string_t> cloud_blob::start_copy_async(const cloud_file& source)
@@ -800,29 +976,29 @@ namespace azure { namespace storage {
         {
             return start_copy_async(source, file_access_condition(), access_condition(), blob_request_options(), operation_context());
         }
 
-        pplx::task<utility::string_t> cloud_blob::start_copy_async(const cloud_file& source, const file_access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context)
+        pplx::task<utility::string_t> cloud_blob::start_copy_async(const cloud_file& source, const cloud_metadata& metadata, const file_access_condition& source_condition, const access_condition& destination_condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
         {
             UNREFERENCED_PARAMETER(source_condition);
             web::http::uri raw_source_uri = source.uri().primary_uri();
             web::http::uri source_uri = source.service_client().credentials().transform_uri(raw_source_uri);
 
-            return start_copy_async(source_uri, access_condition(), destination_condition, options, context);
+            return start_copy_async(source_uri, metadata, access_condition(), destination_condition, options, context, cancellation_token);
         }
 
-        pplx::task<void> cloud_blob::abort_copy_async(const utility::string_t& copy_id, const access_condition& condition, const blob_request_options& options, operation_context context) const
+        pplx::task<void> cloud_blob::abort_copy_async(const utility::string_t& copy_id, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
         {
             assert_no_snapshot();
             blob_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified);
 
-            auto command = std::make_shared<core::storage_command<void>>(uri());
+            auto command = std::make_shared<core::storage_command<void>>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
             command->set_build_request(std::bind(protocol::abort_copy_blob, copy_id, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_preprocess_response(std::bind(protocol::preprocess_response_void, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             return core::executor<void>::execute_async(command, modified_options, context);
         }
 
-        pplx::task<cloud_blob> cloud_blob::create_snapshot_async(cloud_metadata metadata, const access_condition& condition, const blob_request_options& options, operation_context context)
+        pplx::task<cloud_blob> cloud_blob::create_snapshot_async(cloud_metadata metadata, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
         {
             assert_no_snapshot();
             blob_request_options modified_options(options);
@@ -834,10 +1010,10 @@ namespace azure { namespace storage {
             auto snapshot_metadata = std::make_shared<cloud_metadata>(std::move(metadata));
             auto resulting_metadata = snapshot_metadata->empty() ? m_metadata : snapshot_metadata;
 
-            auto command = std::make_shared<core::storage_command<cloud_blob>>(uri());
-            command->set_build_request(std::bind(protocol::snapshot_blob, *snapshot_metadata, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            auto command = std::make_shared<core::storage_command<cloud_blob>>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
+            command->set_build_request(std::bind(protocol::snapshot_blob, *snapshot_metadata, condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
-            command->set_preprocess_response([snapshot_name, snapshot_container, resulting_metadata, properties] (const web::http::http_response& response, const request_result& result, operation_context context) -> cloud_blob
+            command->set_preprocess_response([snapshot_name, snapshot_container, resulting_metadata, properties](const web::http::http_response& response, const request_result& result, operation_context context) -> cloud_blob
             {
                 protocol::preprocess_response_void(response, result, context);
                 auto snapshot_time = protocol::get_header_value(response, protocol::ms_header_snapshot);
@@ -845,6 +1021,7 @@ namespace azure { namespace storage {
                 *snapshot.m_metadata = *resulting_metadata;
                 snapshot.m_properties->copy_from_root(*properties);
                 snapshot.m_properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response));
+                properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response));
                 return snapshot;
             });
             return core::executor<cloud_blob>::execute_async(command, modified_options, context);
diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_blob_client.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_blob_client.cpp
index 8f3d90b8..e2d2a46d 100644
--- a/Microsoft.WindowsAzure.Storage/src/cloud_blob_client.cpp
+++ b/Microsoft.WindowsAzure.Storage/src/cloud_blob_client.cpp
@@ -33,14 +33,14 @@
namespace azure { namespace storage {
                 max_results, 0);
         }
 
-        pplx::task<container_result_segment> cloud_blob_client::list_containers_segmented_async(const utility::string_t& prefix, container_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context) const
+        pplx::task<container_result_segment> cloud_blob_client::list_containers_segmented_async(const utility::string_t& prefix, container_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
         {
             blob_request_options modified_options(options);
             modified_options.apply_defaults(default_request_options(), blob_type::unspecified);
 
             auto client = *this;
-            auto command = std::make_shared<core::storage_command<container_result_segment>>(base_uri());
+            auto command = std::make_shared<core::storage_command<container_result_segment>>(base_uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
             command->set_build_request(std::bind(protocol::list_containers, prefix, includes, max_results, token, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(authentication_handler());
             command->set_location_mode(core::command_location_mode::primary_or_secondary, token.target_location());
@@ -75,7 +75,7 @@ namespace azure { namespace storage {
             return container.list_blobs(actual_prefix, use_flat_blob_listing, includes, max_results, options, context);
         }
 
-        pplx::task<list_blob_item_segment> cloud_blob_client::list_blobs_segmented_async(const utility::string_t& prefix, bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context) const
+        pplx::task<list_blob_item_segment> cloud_blob_client::list_blobs_segmented_async(const utility::string_t& prefix, bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
         {
             blob_request_options modified_options(options);
             modified_options.apply_defaults(default_request_options(), blob_type::unspecified);
@@ -85,31 +85,39 @@ namespace azure { namespace storage {
             parse_blob_name_prefix(prefix, container_name, actual_prefix);
 
             auto container = container_name.empty() ? get_root_container_reference() : get_container_reference(container_name);
-            return container.list_blobs_segmented_async(actual_prefix, use_flat_blob_listing, includes, max_results, token, modified_options, context);
+            return container.list_blobs_segmented_async(actual_prefix, use_flat_blob_listing, includes, max_results, token, modified_options, context, cancellation_token);
         }
 
-        pplx::task<service_properties> cloud_blob_client::download_service_properties_async(const blob_request_options& options, operation_context context) const
+        pplx::task<service_properties> cloud_blob_client::download_service_properties_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
         {
             blob_request_options modified_options(options);
             modified_options.apply_defaults(default_request_options(), blob_type::unspecified);
 
-            return download_service_properties_base_async(modified_options, context);
+            return download_service_properties_base_async(modified_options, context, cancellation_token);
         }
 
-        pplx::task<void> cloud_blob_client::upload_service_properties_async(const service_properties& properties, const service_properties_includes& includes, const blob_request_options& options, operation_context context) const
+        pplx::task<void> cloud_blob_client::upload_service_properties_async(const service_properties& properties, const service_properties_includes& includes, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
         {
             blob_request_options modified_options(options);
             modified_options.apply_defaults(default_request_options(), blob_type::unspecified);
 
-            return upload_service_properties_base_async(properties, includes, modified_options, context);
+            return upload_service_properties_base_async(properties, includes, modified_options, context, cancellation_token);
         }
 
-        pplx::task<service_stats> cloud_blob_client::download_service_stats_async(const blob_request_options& options, operation_context context) const
+        pplx::task<service_stats> cloud_blob_client::download_service_stats_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
         {
             blob_request_options modified_options(options);
             modified_options.apply_defaults(default_request_options(), blob_type::unspecified);
 
-            return download_service_stats_base_async(modified_options, context);
+            return download_service_stats_base_async(modified_options, context, cancellation_token);
+        }
+
+        pplx::task<account_properties> cloud_blob_client::download_account_properties_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
+        {
+            blob_request_options modified_options(options);
+            modified_options.apply_defaults(default_request_options(), blob_type::unspecified);
+
+            return download_account_properties_base_async(base_uri(), modified_options, context, cancellation_token);
         }
 
         cloud_blob_container cloud_blob_client::get_root_container_reference() const
@@ -122,6 +130,20 @@ namespace azure { namespace storage {
             return cloud_blob_container(std::move(container_name), *this);
         }
 
+        pplx::task<account_properties> cloud_blob_client::download_account_properties_base_async(const storage_uri& uri, const request_options& modified_options, operation_context context, const pplx::cancellation_token& cancellation_token) const
+        {
+            auto command = std::make_shared<core::storage_command<account_properties>>(uri, cancellation_token, modified_options.is_maximum_execution_time_customized());
+            command->set_build_request(std::bind(protocol::get_account_properties, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_authentication_handler(authentication_handler());
+            command->set_location_mode(core::command_location_mode::primary_or_secondary);
+            command->set_preprocess_response(std::bind(protocol::preprocess_response<account_properties>, account_properties(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_postprocess_response([](const web::http::http_response& response, const request_result&, const core::ostream_descriptor&, operation_context context) -> pplx::task<account_properties>
+            {
+                return pplx::task_from_result(protocol::blob_response_parsers::parse_account_properties(response));
+            });
+            return core::executor<account_properties>::execute_async(command, modified_options, context);
+        }
+
         void cloud_blob_client::set_authentication_scheme(azure::storage::authentication_scheme value)
         {
             cloud_client::set_authentication_scheme(value);
@@ -148,6 +170,10 @@ namespace azure { namespace storage {
             {
                 set_authentication_handler(std::make_shared<protocol::shared_key_authentication_handler>(std::move(creds)));
             }
+            else if (creds.is_bearer_token())
+            {
+                set_authentication_handler(std::make_shared<protocol::bearer_token_authentication_handler>(std::move(creds)));
+            }
             else
             {
                 set_authentication_handler(std::make_shared<protocol::authentication_handler>());
             }
@@ -169,4 +195,31 @@ namespace azure { namespace storage {
         }
     }
 
+        pplx::task<user_delegation_key> cloud_blob_client::get_user_delegation_key_async(const utility::datetime& start, const utility::datetime& expiry, const request_options& modified_options, operation_context context, const pplx::cancellation_token& cancellation_token)
+        {
+            if (!credentials().is_bearer_token())
+            {
+                throw std::logic_error(protocol::error_uds_missing_credentials);
+            }
+
+            protocol::user_delegation_key_time_writer writer;
+            concurrency::streams::istream stream(concurrency::streams::bytestream::open_istream(writer.write(start, expiry)));
+
+            auto command = std::make_shared<core::storage_command<user_delegation_key>>(base_uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
+            command->set_build_request(std::bind(protocol::get_user_delegation_key, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_authentication_handler(authentication_handler());
+            command->set_location_mode(core::command_location_mode::primary_or_secondary);
+            command->set_preprocess_response(std::bind(protocol::preprocess_response<user_delegation_key>, user_delegation_key(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_postprocess_response([](const web::http::http_response& response, const request_result&, const core::ostream_descriptor&, operation_context context) -> pplx::task<user_delegation_key>
+            {
+                protocol::user_delegation_key_reader reader(response.body());
+                return pplx::task_from_result(reader.move_key());
+            });
+            return core::istream_descriptor::create(stream, checksum_type::none, std::numeric_limits<utility::size64_t>::max(), std::numeric_limits<utility::size64_t>::max(), command->get_cancellation_token()).then([command, context, modified_options, cancellation_token](core::istream_descriptor request_body) -> pplx::task<user_delegation_key>
+            {
+                command->set_request_body(request_body);
+                return core::executor<user_delegation_key>::execute_async(command, modified_options, context);
+            });
+        }
+
 }} // namespace azure::storage
diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_blob_container.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_blob_container.cpp
index 2bdc13fd..5850a941 100644
--- a/Microsoft.WindowsAzure.Storage/src/cloud_blob_container.cpp
+++ b/Microsoft.WindowsAzure.Storage/src/cloud_blob_container.cpp
@@ -80,7 +80,16 @@ namespace azure { namespace storage {
         resource_str.append(name());
 
         // Future resource type changes from "c" => "container"
-        return protocol::get_blob_sas_token(stored_policy_identifier, policy, cloud_blob_shared_access_headers(), _XPLATSTR("c"), resource_str, service_client().credentials());
+        return protocol::get_blob_sas_token(stored_policy_identifier, policy, cloud_blob_shared_access_headers(), _XPLATSTR("c"), resource_str, utility::string_t(), service_client().credentials());
+    }
+
+    utility::string_t cloud_blob_container::get_user_delegation_sas(const user_delegation_key& key, const blob_shared_access_policy& policy) const
+    {
+        utility::string_t resource_str =
+            _XPLATSTR("/") + utility::string_t(protocol::service_blob) +
+            _XPLATSTR("/") + service_client().credentials().account_name() +
+            _XPLATSTR("/") + name();
+
+        return protocol::get_blob_user_delegation_sas_token(policy, cloud_blob_shared_access_headers(), _XPLATSTR("c"), resource_str, utility::string_t(), key);
     }
 
     cloud_blob cloud_blob_container::get_blob_reference(utility::string_t blob_name) const
@@ -128,7 +137,7 @@ namespace azure { namespace storage {
         return cloud_blob_directory(std::move(directory_name), *this);
     }
 
-    pplx::task<void> cloud_blob_container::download_attributes_async(const access_condition& condition, const blob_request_options& options, operation_context context)
+    pplx::task<void> cloud_blob_container::download_attributes_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
     {
         blob_request_options modified_options(options);
         modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified);
@@ -136,7 +145,7 @@ namespace azure { namespace storage {
         auto properties = m_properties;
         auto metadata = m_metadata;
 
-        auto command = std::make_shared<core::storage_command<void>>(uri());
+        auto command = std::make_shared<core::storage_command<void>>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
        command->set_build_request(std::bind(protocol::get_blob_container_properties, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
        command->set_authentication_handler(service_client().authentication_handler());
        command->set_location_mode(core::command_location_mode::primary_or_secondary);
@@ -146,47 +155,58 @@ namespace azure { namespace storage {
            *properties = protocol::blob_response_parsers::parse_blob_container_properties(response);
            *metadata = protocol::parse_metadata(response);
        });
+        return core::executor<void>::execute_async(command, modified_options,
context); } - pplx::task cloud_blob_container::upload_metadata_async(const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_blob_container::upload_metadata_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::set_blob_container_metadata, metadata(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_container_properties(response)); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob_container::acquire_lease_async(const lease_time& duration, const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context) const + pplx::task cloud_blob_container::download_account_properties_async(const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const + { + blob_request_options 
modified_options(options); + modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); + + return service_client().download_account_properties_base_async(uri(), modified_options, context, cancellation_token); + } + + pplx::task cloud_blob_container::acquire_lease_async(const lease_time& duration, const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::lease_blob_container, protocol::header_value_lease_acquire, proposed_lease_id, duration, lease_break_period(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_container_properties(response)); return protocol::parse_lease_id(response); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob_container::renew_lease_async(const access_condition& condition, const blob_request_options& options, operation_context 
context) const + pplx::task cloud_blob_container::renew_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { if (condition.lease_id().empty()) { @@ -198,18 +218,19 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::lease_blob_container, protocol::header_value_lease_renew, utility::string_t(), lease_time(), lease_break_period(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_container_properties(response)); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob_container::change_lease_async(const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context) const + pplx::task cloud_blob_container::change_lease_async(const utility::string_t& proposed_lease_id, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { if (condition.lease_id().empty()) { @@ -221,19 +242,20 @@ namespace azure { namespace storage { auto properties = m_properties; - auto 
command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::lease_blob_container, protocol::header_value_lease_change, proposed_lease_id, lease_time(), lease_break_period(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_container_properties(response)); return protocol::parse_lease_id(response); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob_container::release_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context) const + pplx::task cloud_blob_container::release_lease_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { if (condition.lease_id().empty()) { @@ -245,65 +267,80 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::lease_blob_container, protocol::header_value_lease_release, utility::string_t(), lease_time(), lease_break_period(), condition, std::placeholders::_1, 
std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_container_properties(response)); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob_container::break_lease_async(const lease_break_period& break_period, const access_condition& condition, const blob_request_options& options, operation_context context) const + pplx::task cloud_blob_container::break_lease_async(const lease_break_period& break_period, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::lease_blob_container, protocol::header_value_lease_break, utility::string_t(), lease_time(), break_period, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) -> std::chrono::seconds + 
command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> std::chrono::seconds { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_container_properties(response)); return protocol::parse_lease_time(response); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob_container::create_async(blob_container_public_access_type public_access, const blob_request_options& options, operation_context context) + pplx::task cloud_blob_container::create_async(blob_container_public_access_type public_access, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::create_blob_container, public_access, metadata(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties, public_access](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); + properties->m_public_access = public_access; properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_container_properties(response)); }); + 
return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob_container::create_if_not_exists_async(blob_container_public_access_type public_access, const blob_request_options& options, operation_context context) + pplx::task cloud_blob_container::create_if_not_exists_async(blob_container_public_access_type public_access, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); + std::chrono::steady_clock::time_point first_time_point; + + if (modified_options.is_maximum_execution_time_customized()) + { + first_time_point = std::chrono::steady_clock::now(); + } auto instance = std::make_shared(*this); - return exists_async(true, modified_options, context).then([instance, public_access, modified_options, context] (bool exists_result) -> pplx::task + return exists_async_impl(true, modified_options, context, cancellation_token).then([instance, public_access, modified_options, context, first_time_point, cancellation_token, options] (bool exists_result) mutable -> pplx::task { if (!exists_result) { - return instance->create_async(public_access, modified_options, context).then([] (pplx::task create_task) -> bool + if (modified_options.is_maximum_execution_time_customized()) + { + auto new_max_execution_time = modified_options.maximum_execution_time() - std::chrono::duration_cast(std::chrono::steady_clock::now() - first_time_point); + modified_options.set_maximum_execution_time(new_max_execution_time); + } + return instance->create_async(public_access, modified_options, context, cancellation_token).then([] (pplx::task create_task) -> bool { try { @@ -333,12 +370,12 @@ namespace azure { namespace storage { }); } - pplx::task cloud_blob_container::delete_container_async(const access_condition& condition, const blob_request_options& 
options, operation_context context) + pplx::task cloud_blob_container::delete_container_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::delete_blob_container, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); @@ -348,20 +385,32 @@ namespace azure { namespace storage { protocol::preprocess_response_void(response, result, context); properties->initialization(); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob_container::delete_container_if_exists_async(const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_blob_container::delete_container_if_exists_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); + std::chrono::steady_clock::time_point first_time_point; + + if (options.is_maximum_execution_time_customized()) + { + first_time_point = std::chrono::steady_clock::now(); + } auto instance = std::make_shared(*this); - return exists_async(true, modified_options, context).then([instance, condition, modified_options, context] (bool exists_result) -> pplx::task + return exists_async_impl(true, modified_options, context, 
cancellation_token).then([instance, condition, modified_options, context, first_time_point, cancellation_token, options](bool exists_result) mutable -> pplx::task { if (exists_result) { - return instance->delete_container_async(condition, modified_options, context).then([] (pplx::task delete_task) -> bool + if (modified_options.is_maximum_execution_time_customized()) + { + auto new_max_execution_time = modified_options.maximum_execution_time() - std::chrono::duration_cast(std::chrono::steady_clock::now() - first_time_point); + modified_options.set_maximum_execution_time(new_max_execution_time); + } + return instance->delete_container_async(condition, modified_options, context, cancellation_token).then([](pplx::task delete_task) -> bool { try { @@ -386,7 +435,7 @@ namespace azure { namespace storage { } else { - return pplx::task_from_result(false); + return pplx::task_from_result(false); } }); } @@ -402,7 +451,7 @@ namespace azure { namespace storage { max_results, 0); } - pplx::task cloud_blob_container::list_blobs_segmented_async(const utility::string_t& prefix, bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context) const + pplx::task cloud_blob_container::list_blobs_segmented_async(const utility::string_t& prefix, bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); @@ -420,12 +469,12 @@ namespace azure { namespace storage { delimiter = service_client().directory_delimiter(); } - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, 
modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::list_blobs, prefix, delimiter, includes, max_results, token, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); command->set_location_mode(core::command_location_mode::primary_or_secondary, token.target_location()); command->set_preprocess_response(std::bind(protocol::preprocess_response, list_blob_item_segment(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); - command->set_postprocess_response([container, delimiter] (const web::http::http_response& response, const request_result& result, const core::ostream_descriptor&, operation_context context) -> pplx::task + command->set_postprocess_response([container, delimiter, includes] (const web::http::http_response& response, const request_result& result, const core::ostream_descriptor&, operation_context context) -> pplx::task { protocol::list_blobs_reader reader(response.body()); @@ -437,7 +486,9 @@ namespace azure { namespace storage { for (auto iter = blob_items.begin(); iter != blob_items.end(); ++iter) { - list_blob_items.push_back(list_blob_item(iter->move_name(), iter->move_snapshot_time(), container, iter->move_properties(), iter->move_metadata(), iter->move_copy_state())); + auto properties = iter->move_properties(); + utility::string_t version_id = (includes & blob_listing_details::values::versions) ? 
properties.version_id() : utility::string_t(); + list_blob_items.push_back(list_blob_item(iter->move_name(), iter->move_snapshot_time(), std::move(version_id), iter->is_current_version(), container, std::move(properties), iter->move_metadata(), iter->move_copy_state())); } for (auto iter = blob_prefix_items.begin(); iter != blob_prefix_items.end(); ++iter) @@ -450,10 +501,11 @@ namespace azure { namespace storage { return pplx::task_from_result(list_blob_item_segment(std::move(list_blob_items), std::move(next_token))); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob_container::upload_permissions_async(const blob_container_permissions& permissions, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_blob_container::upload_permissions_async(const blob_container_permissions& permissions, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); @@ -463,50 +515,56 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::set_blob_container_acl, permissions.public_access(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& 
result, operation_context context) { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_container_properties(response)); }); - return core::istream_descriptor::create(stream).then([command, context, modified_options] (core::istream_descriptor request_body) -> pplx::task + + return core::istream_descriptor::create(stream, checksum_type::none, std::numeric_limits::max(), std::numeric_limits::max(), command->get_cancellation_token()).then([command, context, modified_options, cancellation_token, options](core::istream_descriptor request_body) -> pplx::task { command->set_request_body(request_body); return core::executor::execute_async(command, modified_options, context); }); } - pplx::task cloud_blob_container::download_permissions_async(const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_blob_container::download_permissions_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::get_blob_container_acl, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); command->set_location_mode(core::command_location_mode::primary_or_secondary); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) -> blob_container_permissions + 
command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> blob_container_permissions { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_container_properties(response)); return blob_container_permissions(); }); - command->set_postprocess_response([] (const web::http::http_response& response, const request_result&, const core::ostream_descriptor&, operation_context context) -> pplx::task + command->set_postprocess_response([properties](const web::http::http_response& response, const request_result&, const core::ostream_descriptor&, operation_context context) -> pplx::task { blob_container_permissions permissions; protocol::access_policy_reader reader(response.body()); permissions.set_policies(reader.move_policies()); - permissions.set_public_access(protocol::blob_response_parsers::parse_public_access_type(response)); + + auto public_access_type = protocol::parse_public_access_type(response); + permissions.set_public_access(public_access_type); + properties->m_public_access = public_access_type; + return pplx::task_from_result(permissions); }); + return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_blob_container::exists_async(bool primary_only, const blob_request_options& options, operation_context context) + pplx::task cloud_blob_container::exists_async_impl(bool primary_only, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), blob_type::unspecified); @@ -514,11 +572,11 @@ namespace azure { namespace storage { auto properties = m_properties; auto metadata = m_metadata; - auto command = std::make_shared>(uri()); + auto command = 
std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::get_blob_container_properties, access_condition(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); command->set_location_mode(primary_only ? core::command_location_mode::primary_only : core::command_location_mode::primary_or_secondary); - command->set_preprocess_response([properties, metadata] (const web::http::http_response& response, const request_result& result, operation_context context) -> bool + command->set_preprocess_response([properties, metadata](const web::http::http_response& response, const request_result& result, operation_context context) -> bool { if (response.status_code() == web::http::status_codes::NotFound) { diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_blob_directory.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_blob_directory.cpp index 8127ca91..aaf4ccb8 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_blob_directory.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_blob_directory.cpp @@ -97,9 +97,9 @@ namespace azure { namespace storage { return m_container.list_blobs(m_name, use_flat_blob_listing, includes, max_results, options, context); } - pplx::task cloud_blob_directory::list_blobs_segmented_async(bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context) const + pplx::task cloud_blob_directory::list_blobs_segmented_async(bool use_flat_blob_listing, blob_listing_details::values includes, int max_results, const continuation_token& token, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { - return m_container.list_blobs_segmented_async(m_name, use_flat_blob_listing, includes, 
max_results, token, options, context); + return m_container.list_blobs_segmented_async(m_name, use_flat_blob_listing, includes, max_results, token, options, context, cancellation_token); } }} // namespace azure::storage diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_blob_istreambuf.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_blob_istreambuf.cpp index b219e8e0..055b77da 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_blob_istreambuf.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_blob_istreambuf.cpp @@ -140,7 +140,7 @@ namespace azure { namespace storage { namespace core { temp_buffer.seekpos(0, std::ios_base::out); auto this_pointer = std::dynamic_pointer_cast(shared_from_this()); - return m_blob->download_range_to_stream_async(temp_buffer.create_ostream(), m_current_blob_offset, read_size, m_condition, m_options, m_context).then([this_pointer, temp_buffer] (pplx::task download_task) -> pplx::task + return m_blob->download_range_to_stream_async(temp_buffer.create_ostream(), m_current_blob_offset, read_size, m_condition, m_options, m_context, m_cancellation_token).then([this_pointer, temp_buffer] (pplx::task download_task) -> pplx::task { try { @@ -148,7 +148,7 @@ namespace azure { namespace storage { namespace core { this_pointer->m_buffer = concurrency::streams::container_buffer>(std::move(temp_buffer.collection()), std::ios_base::in); this_pointer->m_buffer.seekpos(0, std::ios_base::in); - // Validate the blob's content MD5 hash + // Validate the blob's content checksum if (this_pointer->m_blob_hash_provider.is_enabled()) { std::vector& result_buffer = this_pointer->m_buffer.collection(); @@ -157,7 +157,8 @@ namespace azure { namespace storage { namespace core { if (((utility::size64_t) this_pointer->m_next_blob_offset) == this_pointer->size()) { this_pointer->m_blob_hash_provider.close(); - if (this_pointer->m_blob->properties().content_md5() != this_pointer->m_blob_hash_provider.hash()) + checksum checksum = 
this_pointer->m_blob_hash_provider.hash(); + if (checksum.is_md5() && this_pointer->m_blob->properties().content_md5() != this_pointer->m_blob_hash_provider.hash().md5()) { throw storage_exception(protocol::error_md5_mismatch); } diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_blob_ostreambuf.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_blob_ostreambuf.cpp index 0efb6da6..b053a433 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_blob_ostreambuf.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_blob_ostreambuf.cpp @@ -58,7 +58,7 @@ namespace azure { namespace storage { namespace core { { try { - this_pointer->m_blob->upload_block_async(block_id, buffer->stream(), buffer->content_md5(), this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context).then([this_pointer] (pplx::task upload_task) + this_pointer->m_blob->upload_block_async_impl(block_id, buffer->stream(), buffer->content_checksum(), this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context, this_pointer->m_cancellation_token, this_pointer->m_use_request_level_timeout, this_pointer->m_timer_handler).then([this_pointer] (pplx::task upload_task) { std::lock_guard guard(this_pointer->m_semaphore, std::adopt_lock); try @@ -71,9 +71,10 @@ namespace azure { namespace storage { namespace core { } }); } - catch (...) 
+ catch (const std::exception&) { this_pointer->m_semaphore.unlock(); + this_pointer->m_currentException = std::current_exception(); } } else @@ -90,10 +91,10 @@ namespace azure { namespace storage { namespace core { { if (this_pointer->m_total_hash_provider.is_enabled()) { - this_pointer->m_blob->properties().set_content_md5(this_pointer->m_total_hash_provider.hash()); + this_pointer->m_blob->properties().set_content_md5(this_pointer->m_total_hash_provider.hash().md5()); } - return this_pointer->m_blob->upload_block_list_async(this_pointer->m_block_list, this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context); + return this_pointer->m_blob->upload_block_list_async_impl(this_pointer->m_block_list, this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context, this_pointer->m_cancellation_token, this_pointer->m_use_request_level_timeout,this_pointer->m_timer_handler); }); } @@ -126,7 +127,7 @@ namespace azure { namespace storage { namespace core { { try { - this_pointer->m_blob->upload_pages_async(buffer->stream(), offset, buffer->content_md5(), this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context).then([this_pointer] (pplx::task upload_task) + this_pointer->m_blob->upload_pages_async_impl(buffer->stream(), offset, buffer->content_checksum(), this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context, this_pointer->m_cancellation_token, this_pointer->m_use_request_level_timeout, this_pointer->m_timer_handler).then([this_pointer] (pplx::task upload_task) { std::lock_guard guard(this_pointer->m_semaphore, std::adopt_lock); try @@ -158,8 +159,8 @@ namespace azure { namespace storage { namespace core { auto this_pointer = std::dynamic_pointer_cast(shared_from_this()); return _sync().then([this_pointer] (bool) -> pplx::task { - this_pointer->m_blob->properties().set_content_md5(this_pointer->m_total_hash_provider.hash()); - return 
this_pointer->m_blob->upload_properties_async(this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context); + this_pointer->m_blob->properties().set_content_md5(this_pointer->m_total_hash_provider.hash().md5()); + return this_pointer->m_blob->upload_properties_async_impl(this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context, this_pointer->m_cancellation_token, this_pointer->m_use_request_level_timeout, this_pointer->m_timer_handler); }); } else @@ -196,7 +197,8 @@ namespace azure { namespace storage { namespace core { { this_pointer->m_condition.set_append_position(offset); auto previous_results_count = this_pointer->m_context.request_results().size(); - this_pointer->m_blob->append_block_async(buffer->stream(), buffer->content_md5(), this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context).then([this_pointer, previous_results_count](pplx::task upload_task) + pplx::task task; + this_pointer->m_blob->append_block_async_impl(buffer->stream(), buffer->content_checksum(), this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context, this_pointer->m_cancellation_token, this_pointer->m_use_request_level_timeout, this_pointer->m_timer_handler).then([this_pointer, previous_results_count](pplx::task upload_task) { std::lock_guard guard(this_pointer->m_semaphore, std::adopt_lock); try @@ -247,8 +249,8 @@ namespace azure { namespace storage { namespace core { auto this_pointer = std::dynamic_pointer_cast(shared_from_this()); return _sync().then([this_pointer](bool) -> pplx::task { - this_pointer->m_blob->properties().set_content_md5(this_pointer->m_total_hash_provider.hash()); - return this_pointer->m_blob->upload_properties_async(this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context); + this_pointer->m_blob->properties().set_content_md5(this_pointer->m_total_hash_provider.hash().md5()); + return this_pointer->m_blob->upload_properties_async_impl(this_pointer->m_condition, 
this_pointer->m_options, this_pointer->m_context, this_pointer->m_cancellation_token, this_pointer->m_use_request_level_timeout, this_pointer->m_timer_handler); }); } else diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_blob_shared.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_blob_shared.cpp index 7cd89a3d..073ae730 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_blob_shared.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_blob_shared.cpp @@ -32,6 +32,7 @@ namespace azure { namespace storage { { m_etag = parsed_properties.etag(); m_last_modified = parsed_properties.last_modified(); + m_version_id = parsed_properties.version_id(); } void cloud_blob_properties::copy_from_root(const cloud_blob_properties& root_blob_properties) @@ -47,6 +48,7 @@ namespace azure { namespace storage { m_content_language = root_blob_properties.m_content_language; m_content_md5 = root_blob_properties.m_content_md5; m_content_type = root_blob_properties.m_content_type; + m_encryption_key_sha256 = root_blob_properties.m_encryption_key_sha256; } void cloud_blob_properties::update_size(const cloud_blob_properties& parsed_properties) @@ -64,16 +66,14 @@ namespace azure { namespace storage { m_append_blob_committed_block_count = parsed_properties.append_blob_committed_block_count(); } - void cloud_blob_properties::update_all(const cloud_blob_properties& parsed_properties, bool ignore_md5) + void cloud_blob_properties::update_all(const cloud_blob_properties& parsed_properties) { if ((type() != blob_type::unspecified) && (type() != parsed_properties.type())) { throw storage_exception(protocol::error_blob_type_mismatch, false); } - utility::string_t content_md5(ignore_md5 ? 
m_content_md5 : parsed_properties.content_md5()); *this = parsed_properties; - m_content_md5 = content_md5; } }} // namespace azure::storage diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_block_blob.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_block_blob.cpp index 3816c35f..5c9914d5 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_block_blob.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_block_blob.cpp @@ -22,66 +22,86 @@ namespace azure { namespace storage { - pplx::task cloud_block_blob::upload_block_async(const utility::string_t& block_id, concurrency::streams::istream block_data, const utility::string_t& content_md5, const access_condition& condition, const blob_request_options& options, operation_context context) const + pplx::task cloud_block_blob::upload_block_async_impl(const utility::string_t& block_id, concurrency::streams::istream block_data, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_timeout, std::shared_ptr timer_handler) const { assert_no_snapshot(); blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), type()); - bool needs_md5 = content_md5.empty() && modified_options.use_transactional_md5(); + bool needs_md5 = modified_options.use_transactional_md5() && !content_checksum.is_md5(); + bool needs_crc64 = modified_options.use_transactional_crc64() && !content_checksum.is_crc64(); + checksum_type needs_checksum = checksum_type::none; + if (needs_md5) + { + needs_checksum = checksum_type::md5; + } + else if (needs_crc64) + { + needs_checksum = checksum_type::crc64; + } - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, (modified_options.is_maximum_execution_time_customized() && use_timeout), timer_handler); 
command->set_authentication_handler(service_client().authentication_handler()); command->set_preprocess_response(std::bind(protocol::preprocess_response_void, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); - return core::istream_descriptor::create(block_data, needs_md5, std::numeric_limits::max(), protocol::max_block_size).then([command, context, block_id, content_md5, modified_options, condition](core::istream_descriptor request_body) -> pplx::task + return core::istream_descriptor::create(block_data, needs_checksum, std::numeric_limits::max(), protocol::max_block_size, command->get_cancellation_token()).then([command, context, block_id, content_checksum, modified_options, condition](core::istream_descriptor request_body) -> pplx::task { - const utility::string_t& md5 = content_md5.empty() ? request_body.content_md5() : content_md5; - command->set_build_request(std::bind(protocol::put_block, block_id, md5, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + const auto& checksum = content_checksum.empty() ? 
request_body.content_checksum() : content_checksum; + command->set_build_request(std::bind(protocol::put_block, block_id, checksum, condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_request_body(request_body); return core::executor::execute_async(command, modified_options, context); }); } - pplx::task cloud_block_blob::upload_block_list_async(const std::vector& block_list, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_block_blob::upload_block_list_async_impl(const std::vector& block_list, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_timeout, std::shared_ptr timer_handler) { assert_no_snapshot(); blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), type()); bool needs_md5 = modified_options.use_transactional_md5(); + bool needs_crc64 = modified_options.use_transactional_crc64(); + checksum_type needs_checksum = checksum_type::none; + if (needs_md5) + { + needs_checksum = checksum_type::md5; + } + else if (needs_crc64) + { + needs_checksum = checksum_type::crc64; + } protocol::block_list_writer writer; concurrency::streams::istream stream(concurrency::streams::bytestream::open_istream(writer.write(block_list))); auto properties = m_properties; - - auto command = std::make_shared>(uri()); + + auto command = std::make_shared>(uri(), cancellation_token, (modified_options.is_maximum_execution_time_customized() && use_timeout), timer_handler); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, 
const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response)); }); - return core::istream_descriptor::create(stream, needs_md5).then([command, properties, this, context, modified_options, condition] (core::istream_descriptor request_body) -> pplx::task + return core::istream_descriptor::create(stream, needs_checksum, std::numeric_limits::max(), std::numeric_limits::max(), command->get_cancellation_token()).then([command, properties, this, context, modified_options, condition](core::istream_descriptor request_body) -> pplx::task { - command->set_build_request(std::bind(protocol::put_block_list, *properties, metadata(), request_body.content_md5(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + command->set_build_request(std::bind(protocol::put_block_list, *properties, metadata(), request_body.content_checksum(), condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_request_body(request_body); return core::executor::execute_async(command, modified_options, context); }); } - pplx::task> cloud_block_blob::download_block_list_async(block_listing_filter listing_filter, const access_condition& condition, const blob_request_options& options, operation_context context) const + pplx::task> cloud_block_blob::download_block_list_async(block_listing_filter listing_filter, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const { blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), type()); auto properties = m_properties; - auto command = std::make_shared>>(uri()); + auto command = std::make_shared>>(uri(), cancellation_token, 
modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::get_block_list, listing_filter, snapshot_time(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); command->set_location_mode(core::command_location_mode::primary_or_secondary); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) -> std::vector + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> std::vector { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response)); @@ -92,10 +112,11 @@ namespace azure { namespace storage { protocol::block_list_reader reader(response.body()); return pplx::task_from_result(reader.move_result()); }); + return core::executor>::execute_async(command, modified_options, context); } - pplx::task cloud_block_blob::open_write_async(const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_block_blob::open_write_async_impl(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout, std::shared_ptr timer_handler) { assert_no_snapshot(); blob_request_options modified_options(options); @@ -104,7 +125,7 @@ namespace azure { namespace storage { pplx::task check_condition_task; if (condition.is_conditional()) { - check_condition_task = download_attributes_async(condition, modified_options, context).then([condition] (pplx::task download_attributes_task) + check_condition_task = download_attributes_async_impl(condition, modified_options, context, 
cancellation_token, false, timer_handler).then([condition, timer_handler](pplx::task download_attributes_task) { try { @@ -131,13 +152,13 @@ namespace azure { namespace storage { } auto instance = std::make_shared(*this); - return check_condition_task.then([instance, condition, modified_options, context] () + return check_condition_task.then([instance, condition, modified_options, context, cancellation_token, use_request_level_timeout, timer_handler]() { - return core::cloud_block_blob_ostreambuf(instance, condition, modified_options, context).create_ostream(); + return core::cloud_block_blob_ostreambuf(instance, condition, modified_options, context, cancellation_token, use_request_level_timeout, timer_handler).create_ostream(); }); } - pplx::task cloud_block_blob::upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_block_blob::upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { assert_no_snapshot(); blob_request_options modified_options(options); @@ -171,41 +192,112 @@ namespace azure { namespace storage { auto properties = m_properties; auto metadata = m_metadata; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, true); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); 
properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response)); }); - return core::istream_descriptor::create(source, modified_options.store_blob_content_md5(), length).then([command, context, properties, metadata, condition, modified_options] (core::istream_descriptor request_body) -> pplx::task + + bool needs_md5 = modified_options.store_blob_content_md5(); + bool needs_crc64 = modified_options.use_transactional_crc64(); + checksum_type need_checksum = checksum_type::none; + if (needs_md5) + { + need_checksum = checksum_type::md5; + } + else if (needs_crc64) + { + need_checksum = checksum_type::crc64; + } + + return core::istream_descriptor::create(source, need_checksum, length, protocol::max_single_blob_upload_threshold, command->get_cancellation_token()).then([command, context, properties, metadata, condition, modified_options](core::istream_descriptor request_body) -> pplx::task { - if (!request_body.content_md5().empty()) + if (request_body.content_checksum().is_md5()) + { + properties->set_content_md5(request_body.content_checksum().md5()); + } + checksum content_checksum; + if (request_body.content_checksum().is_crc64()) { - properties->set_content_md5(request_body.content_md5()); + content_checksum = request_body.content_checksum(); } - command->set_build_request(std::bind(protocol::put_block_blob, *properties, *metadata, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + command->set_build_request(std::bind(protocol::put_block_blob, content_checksum, *properties, *metadata, condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_request_body(request_body); return core::executor::execute_async(command, modified_options, context); }); } - return open_write_async(condition, modified_options, context).then([source, length] (concurrency::streams::ostream blob_stream) -> pplx::task + // Check if the total required blocks for the 
upload exceeds the maximum allowable block limit. + // Adjust the block size to ensure a successful upload, but only if the user has not explicitly set a value. + // Otherwise, throw a storage_exception if the block size was customized or the blob size exceeds the maximum capacity. + if (length != std::numeric_limits::max()) { - return core::stream_copy_async(source, blob_stream, length).then([blob_stream] (utility::size64_t) -> pplx::task + auto totalBlocks = std::ceil(static_cast(length) / static_cast(modified_options.stream_write_size_in_bytes())); + + // Too many blocks would be required at the current block size. + if (totalBlocks > protocol::max_block_number) { - return blob_stream.close(); + if (modified_options.stream_write_size_in_bytes().has_value() || length > protocol::max_block_blob_size) + { + throw storage_exception(protocol::error_blob_over_max_block_limit); + } + else + { + // Scale the block size up to ensure a successful upload (only if the user did not specify a value). + modified_options.set_stream_write_size_in_bytes(static_cast(std::ceil(static_cast(length) / protocol::max_block_number))); + } + } + } + + auto timer_handler = std::make_shared(cancellation_token); + + if (modified_options.is_maximum_execution_time_customized()) + { + timer_handler->start_timer(options.maximum_execution_time()); // azure::storage::core::timer_handler will automatically stop the timer when destructed.
+ } + + return open_write_async_impl(condition, modified_options, context, timer_handler->get_cancellation_token(), false, timer_handler).then([source, length, timer_handler](concurrency::streams::ostream blob_stream) -> pplx::task + { + return core::stream_copy_async(source, blob_stream, length, std::numeric_limits::max(), timer_handler->get_cancellation_token(), timer_handler).then([blob_stream, timer_handler](pplx::task copy_task)->pplx::task + { + return blob_stream.close().then([timer_handler, copy_task](pplx::task close_task) + { + try + { + copy_task.wait(); + } + catch (const std::exception&) + { + try + { + close_task.wait(); + } + catch (...) + { + } + throw; + } + close_task.wait(); + }); }); }); } - pplx::task cloud_block_blob::upload_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_block_blob::upload_from_file_async(const utility::string_t &path, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { auto instance = std::make_shared(*this); - return concurrency::streams::file_stream::open_istream(path).then([instance, condition, options, context] (concurrency::streams::istream stream) -> pplx::task + return concurrency::streams::file_stream::open_istream(path).then([instance, condition, options, context, cancellation_token] (concurrency::streams::istream stream) -> pplx::task { - return instance->upload_from_stream_async(stream, condition, options, context).then([stream] (pplx::task upload_task) -> pplx::task + utility::size64_t remaining_stream_length = core::get_remaining_stream_length(stream); + if (remaining_stream_length == std::numeric_limits::max()) + { + throw storage_exception(protocol::error_stream_length_unknown); + } + + return instance->upload_from_stream_async(stream, std::numeric_limits::max(), condition, options, context, 
cancellation_token).then([stream] (pplx::task upload_task) -> pplx::task { return stream.close().then([upload_task]() { @@ -215,21 +307,21 @@ namespace azure { namespace storage { }); } - pplx::task cloud_block_blob::upload_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_block_blob::upload_text_async(const utility::string_t& content, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { auto utf8_body = utility::conversions::to_utf8string(content); auto length = utf8_body.size(); auto stream = concurrency::streams::bytestream::open_istream(std::move(utf8_body)); m_properties->set_content_type(protocol::header_value_content_type_utf8); - return upload_from_stream_async(stream, length, condition, options, context); + return upload_from_stream_async(stream, length, condition, options, context, cancellation_token); } - pplx::task cloud_block_blob::download_text_async(const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_block_blob::download_text_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { auto properties = m_properties; concurrency::streams::container_buffer> buffer; - return download_to_stream_async(buffer.create_ostream(), condition, options, context).then([buffer, properties] () mutable -> utility::string_t + return download_to_stream_async(buffer.create_ostream(), condition, options, context, cancellation_token).then([buffer, properties] () mutable -> utility::string_t { if (properties->content_type() != protocol::header_value_content_type_utf8) { @@ -241,4 +333,44 @@ namespace azure { namespace storage { }); } + pplx::task cloud_block_blob::set_standard_blob_tier_async(const 
standard_blob_tier tier, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) + { + blob_request_options modified_options(options); + modified_options.apply_defaults(service_client().default_request_options(), type()); + + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); + utility::string_t tier_str; + + switch (tier) + { + case standard_blob_tier::archive: + tier_str = protocol::header_value_access_tier_archive; + break; + + case standard_blob_tier::hot: + tier_str = protocol::header_value_access_tier_hot; + break; + + case standard_blob_tier::cool: + tier_str = protocol::header_value_access_tier_cool; + break; + + default: + tier_str = protocol::header_value_access_tier_unknown; + break; + } + + auto properties = m_properties; + + command->set_build_request(std::bind(protocol::set_blob_tier, tier_str, condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + command->set_authentication_handler(service_client().authentication_handler()); + command->set_preprocess_response([properties, tier](const web::http::http_response& response, const request_result& result, operation_context context) -> void + { + protocol::preprocess_response_void(response, result, context); + properties->m_standard_blob_tier = tier; + }); + + return core::executor::execute_async(command, modified_options, context); + } + }} // namespace azure::storage diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_client.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_client.cpp index 19d36bd0..bbe5edaf 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_client.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_client.cpp @@ -22,9 +22,9 @@ namespace azure { namespace storage { - pplx::task cloud_client::download_service_properties_base_async(const request_options& modified_options, 
operation_context context) const + pplx::task cloud_client::download_service_properties_base_async(const request_options& modified_options, operation_context context, const pplx::cancellation_token& cancellation_token) const { - auto command = std::make_shared>(base_uri()); + auto command = std::make_shared>(base_uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::get_service_properties, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(authentication_handler()); command->set_location_mode(core::command_location_mode::primary_or_secondary); @@ -37,25 +37,30 @@ namespace azure { namespace storage { return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_client::upload_service_properties_base_async(const service_properties& properties, const service_properties_includes& includes, const request_options& modified_options, operation_context context) const + pplx::task cloud_client::upload_service_properties_base_async(const service_properties& properties, const service_properties_includes& includes, const request_options& modified_options, operation_context context, const pplx::cancellation_token& cancellation_token) const { protocol::service_properties_writer writer; concurrency::streams::istream stream(concurrency::streams::bytestream::open_istream(writer.write(properties, includes))); - auto command = std::make_shared>(base_uri()); + auto command = std::make_shared>(base_uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::set_service_properties, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(authentication_handler()); command->set_preprocess_response(std::bind(protocol::preprocess_response_void, std::placeholders::_1, std::placeholders::_2, 
std::placeholders::_3)); - return core::istream_descriptor::create(stream).then([command, context, modified_options] (core::istream_descriptor request_body) -> pplx::task + return core::istream_descriptor::create(stream, checksum_type::none, std::numeric_limits::max(), std::numeric_limits::max(), command->get_cancellation_token()).then([command, context, modified_options, cancellation_token] (core::istream_descriptor request_body) -> pplx::task { command->set_request_body(request_body); return core::executor::execute_async(command, modified_options, context); }); } - pplx::task cloud_client::download_service_stats_base_async(const request_options& modified_options, operation_context context) const + pplx::task cloud_client::download_service_stats_base_async(const request_options& modified_options, operation_context context, const pplx::cancellation_token& cancellation_token) const { - auto command = std::make_shared>(base_uri()); + if (modified_options.location_mode() == location_mode::primary_only) + { + throw storage_exception("download_service_stats cannot be run with a 'primary_only' location mode."); + } + + auto command = std::make_shared>(base_uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::get_service_stats, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(authentication_handler()); command->set_location_mode(core::command_location_mode::primary_or_secondary); diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_common.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_common.cpp index 2754d25c..7dfcfeb0 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_common.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_common.cpp @@ -31,7 +31,7 @@ namespace azure { namespace storage { WASTORAGE_API request_options::request_options() : m_location_mode(azure::storage::location_mode::primary_only), 
m_http_buffer_size(protocol::default_buffer_size),\ m_maximum_execution_time(protocol::default_maximum_execution_time), m_server_timeout(protocol::default_server_timeout),\ - m_noactivity_timeout(protocol::default_noactivity_timeout) + m_noactivity_timeout(protocol::default_noactivity_timeout),m_validate_certificates(protocol::default_validate_certificates) { } diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_core.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_core.cpp index 8756ccfd..4c0eb306 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_core.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_core.cpp @@ -23,6 +23,10 @@ namespace azure { namespace storage { +#ifdef _WIN32 + static std::shared_ptr s_delayedScheduler; +#endif + storage_uri::storage_uri(web::http::uri primary_uri) : m_primary_uri(std::move(primary_uri)) { @@ -58,4 +62,26 @@ namespace azure { namespace storage { } } +#ifdef _WIN32 + void __cdecl set_wastorage_ambient_scheduler(const std::shared_ptr& scheduler) + { + pplx::set_ambient_scheduler(scheduler); + } + + const std::shared_ptr __cdecl get_wastorage_ambient_scheduler() + { + return pplx::get_ambient_scheduler(); + } + + void __cdecl set_wastorage_ambient_delayed_scheduler(const std::shared_ptr& scheduler) + { + s_delayedScheduler = scheduler; + } + + const std::shared_ptr& __cdecl get_wastorage_ambient_delayed_scheduler() + { + return s_delayedScheduler; + } +#endif + }} // namespace azure::storage diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_file.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_file.cpp index 7a447a53..e160d09c 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_file.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_file.cpp @@ -36,9 +36,27 @@ namespace azure { namespace storage { m_last_modified = other.last_modified(); } - void cloud_file_properties::update_etag(const cloud_file_properties& other) + void cloud_file_properties::update_acl_attributes_filetime_and_fileid(const cloud_file_properties& 
other)
         {
-            m_etag = other.etag();
+            m_creation_time = other.m_creation_time;
+            m_creation_time_now = other.m_creation_time_now;
+            m_creation_time_preserve = other.m_creation_time_preserve;
+            m_last_write_time = other.m_last_write_time;
+            m_last_write_time_now = other.m_last_write_time_now;
+            m_last_write_time_preserve = other.m_last_write_time_preserve;
+            m_change_time = other.m_change_time;
+            m_permission = other.m_permission;
+            m_permission_key = other.m_permission_key;
+            m_attributes = other.m_attributes;
+            m_file_id = other.m_file_id;
+            m_parent_id = other.m_parent_id;
+        }
+
+        void cloud_file_properties::update_lease(const cloud_file_properties& other)
+        {
+            m_lease_status = other.m_lease_status;
+            m_lease_state = other.m_lease_state;
+            m_lease_duration = other.m_lease_duration;
         }
 
         cloud_file::cloud_file(storage_uri uri)
@@ -47,7 +65,7 @@ namespace azure { namespace storage {
         {
             init(std::move(storage_credentials()));
         }
-
+
         cloud_file::cloud_file(storage_uri uri, storage_credentials credentials)
             : m_uri(std::move(uri)), m_metadata(std::make_shared<cloud_metadata>()), m_properties(std::make_shared<cloud_file_properties>()), m_copy_state(std::make_shared<copy_state>())
@@ -86,19 +104,20 @@ namespace azure { namespace storage {
         pplx::task<void> cloud_file::create_async(int64_t length, const file_access_condition& access_condition, const file_request_options& options, operation_context context)
         {
-            UNREFERENCED_PARAMETER(access_condition);
             file_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options());
 
             auto properties = m_properties;
 
             auto command = std::make_shared<core::storage_command<void>>(uri());
-            command->set_build_request(std::bind(protocol::create_file, length, metadata(), this->properties(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_build_request(std::bind(protocol::create_file, length, metadata(), this->properties(), access_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_preprocess_response([properties, length](const web::http::http_response& response, const request_result& result, operation_context context)
             {
                 protocol::preprocess_response_void(response, result, context);
-                properties->update_etag_and_last_modified(protocol::file_response_parsers::parse_file_properties(response));
+                auto response_properties = protocol::file_response_parsers::parse_file_properties(response);
+                properties->update_etag_and_last_modified(response_properties);
+                properties->update_acl_attributes_filetime_and_fileid(response_properties);
                 properties->m_length = length;
             });
             return core::executor::execute_async(command, modified_options, context);
@@ -131,14 +150,13 @@ namespace azure { namespace storage {
         pplx::task<void> cloud_file::delete_file_async(const file_access_condition& access_condition, const file_request_options& options, operation_context context)
         {
-            UNREFERENCED_PARAMETER(access_condition);
             file_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options());
 
             auto properties = m_properties;
 
             auto command = std::make_shared<core::storage_command<void>>(uri());
-            command->set_build_request(std::bind(protocol::delete_file, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_build_request(std::bind(protocol::delete_file, access_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context)
             {
@@ -174,7 +192,6 @@ namespace azure { namespace storage {
         pplx::task<void> cloud_file::download_attributes_async(const file_access_condition& access_condition, const file_request_options& options, operation_context context)
         {
-            UNREFERENCED_PARAMETER(access_condition);
             file_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options());
@@ -183,7 +200,7 @@ namespace azure { namespace storage {
             auto copy_state = m_copy_state;
 
             auto command = std::make_shared<core::storage_command<void>>(uri());
-            command->set_build_request(std::bind(protocol::get_file_properties, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_build_request(std::bind(protocol::get_file_properties, access_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_preprocess_response([metadata, properties, copy_state](const web::http::http_response& response, const request_result& result, operation_context context)
             {
@@ -197,19 +214,20 @@ namespace azure { namespace storage {
         pplx::task<void> cloud_file::upload_properties_async(const file_access_condition& access_condition, const file_request_options& options, operation_context context) const
         {
-            UNREFERENCED_PARAMETER(access_condition);
             file_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options());
 
             auto properties = m_properties;
 
             auto command = std::make_shared<core::storage_command<void>>(uri());
-            command->set_build_request(std::bind(protocol::set_file_properties, this->properties(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_build_request(std::bind(protocol::set_file_properties, this->properties(), access_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context)
             {
                 protocol::preprocess_response_void(response, result, context);
-                properties->update_etag_and_last_modified(protocol::file_response_parsers::parse_file_properties(response));
+                auto response_properties = protocol::file_response_parsers::parse_file_properties(response);
+                properties->update_etag_and_last_modified(response_properties);
+                properties->update_acl_attributes_filetime_and_fileid(response_properties);
             });
 
             return core::executor::execute_async(command, modified_options, context);
@@ -217,19 +235,18 @@ namespace azure { namespace storage {
         pplx::task<void> cloud_file::upload_metadata_async(const file_access_condition& access_condition, const file_request_options& options, operation_context context) const
         {
-            UNREFERENCED_PARAMETER(access_condition);
             file_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options());
 
             auto properties = m_properties;
 
             auto command = std::make_shared<core::storage_command<void>>(uri());
-            command->set_build_request(std::bind(protocol::set_file_metadata, this->metadata(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_build_request(std::bind(protocol::set_file_metadata, this->metadata(), access_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context)
             {
                 protocol::preprocess_response_void(response, result, context);
-                properties->update_etag(protocol::file_response_parsers::parse_file_properties(response));
+                properties->update_etag_and_last_modified(protocol::file_response_parsers::parse_file_properties(response));
             });
 
             return core::executor::execute_async(command, modified_options, context);
@@ -238,7 +255,6 @@ namespace azure { namespace storage {
         pplx::task<utility::string_t> cloud_file::start_copy_async(const web::http::uri& source, const file_access_condition& source_condition, const file_access_condition&
 dest_condition, const file_request_options& options, operation_context context) const
         {
             UNREFERENCED_PARAMETER(source_condition);
-            UNREFERENCED_PARAMETER(dest_condition);
             file_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options());
@@ -246,7 +262,7 @@ namespace azure { namespace storage {
             auto copy_state = m_copy_state;
 
             auto command = std::make_shared<core::storage_command<utility::string_t>>(uri());
-            command->set_build_request(std::bind(protocol::copy_file, source, this->metadata(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_build_request(std::bind(protocol::copy_file, source, this->metadata(), dest_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_preprocess_response([properties, copy_state](const web::http::http_response& response, const request_result& result, operation_context context)
             {
@@ -269,7 +285,6 @@ namespace azure { namespace storage {
         pplx::task<utility::string_t> cloud_file::start_copy_async(const web::http::uri& source, const access_condition& source_condition, const file_access_condition& dest_condition, const file_request_options& options, operation_context context) const
         {
-            UNREFERENCED_PARAMETER(dest_condition);
             file_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options());
@@ -277,7 +292,7 @@ namespace azure { namespace storage {
             auto copy_state = m_copy_state;
 
             auto command = std::make_shared<core::storage_command<utility::string_t>>(uri());
-            command->set_build_request(std::bind(protocol::copy_file_from_blob, source, source_condition, this->metadata(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_build_request(std::bind(protocol::copy_file_from_blob, source, source_condition, this->metadata(), dest_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_preprocess_response([properties, copy_state](const web::http::http_response& response, const request_result& result, operation_context context)
             {
@@ -306,27 +321,25 @@ namespace azure { namespace storage {
         pplx::task<void> cloud_file::abort_copy_async(const utility::string_t& copy_id, const file_access_condition& access_condition, const file_request_options& options, operation_context context) const
         {
-            UNREFERENCED_PARAMETER(access_condition);
             file_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options());
 
             auto command = std::make_shared<core::storage_command<void>>(uri());
-            command->set_build_request(std::bind(protocol::abort_copy_file, copy_id, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_build_request(std::bind(protocol::abort_copy_file, copy_id, access_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_preprocess_response(std::bind(protocol::preprocess_response_void, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
 
             return core::executor::execute_async(command, modified_options, context);
         }
-
+
         pplx::task<std::vector<file_range>> cloud_file::list_ranges_async(utility::size64_t start_offset, utility::size64_t length, const file_access_condition& access_condition, const file_request_options& options, operation_context context) const
         {
-            UNREFERENCED_PARAMETER(access_condition);
             file_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options());
 
             auto properties = m_properties;
 
             auto command = std::make_shared<core::storage_command<std::vector<file_range>>>(uri());
-            command->set_build_request(std::bind(protocol::list_file_ranges, start_offset, length, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_build_request(std::bind(protocol::list_file_ranges, start_offset, length, access_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> std::vector<file_range>
             {
@@ -349,7 +362,6 @@ namespace azure { namespace storage {
         pplx::task<void> cloud_file::clear_range_async(utility::size64_t start_offset, utility::size64_t length, const file_access_condition& access_condition, const file_request_options& options, operation_context context) const
         {
-            UNREFERENCED_PARAMETER(access_condition);
             file_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options());
@@ -358,7 +370,7 @@ namespace azure { namespace storage {
             file_range range(start_offset, end_offset);
 
             auto command = std::make_shared<core::storage_command<void>>(uri());
-            command->set_build_request(std::bind(protocol::put_file_range, range, file_range_write::clear, utility::string_t(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_build_request(std::bind(protocol::put_file_range, range, file_range_write::clear, utility::string_t(), access_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context)
             {
@@ -369,10 +381,9 @@ namespace azure { namespace storage {
             });
             return core::executor::execute_async(command, modified_options, context);
         }
-
+
         pplx::task<void> cloud_file::write_range_async(Concurrency::streams::istream stream, int64_t start_offset, const utility::string_t& content_md5, const file_access_condition& access_condition, const file_request_options& options,
 operation_context context) const
         {
-            UNREFERENCED_PARAMETER(access_condition);
             file_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options());
@@ -388,12 +399,12 @@ namespace azure { namespace storage {
                 properties->update_etag_and_last_modified(modified_properties);
                 properties->m_content_md5 = modified_properties.content_md5();
             });
-            return core::istream_descriptor::create(stream, needs_md5, std::numeric_limits<utility::size64_t>::max(), protocol::max_block_size).then([command, context, start_offset, content_md5, modified_options](core::istream_descriptor request_body) -> pplx::task<void>
+            return core::istream_descriptor::create(stream, needs_md5 ? checksum_type::md5 : checksum_type::none, std::numeric_limits<utility::size64_t>::max(), protocol::max_range_size).then([command, context, start_offset, content_md5, access_condition, modified_options](core::istream_descriptor request_body) -> pplx::task<void>
             {
-                const utility::string_t& md5 = content_md5.empty() ? request_body.content_md5() : content_md5;
+                const utility::string_t& md5 = content_md5.empty() ? request_body.content_checksum().md5() : content_md5;
                 auto end_offset = start_offset + request_body.length() - 1;
                 file_range range(start_offset, end_offset);
-                command->set_build_request(std::bind(protocol::put_file_range, range, file_range_write::update, md5, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+                command->set_build_request(std::bind(protocol::put_file_range, range, file_range_write::update, md5, access_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
                 command->set_request_body(request_body);
                 return core::executor::execute_async(command, modified_options, context);
             });
@@ -412,7 +423,6 @@ namespace azure { namespace storage {
         pplx::task<void> cloud_file::download_single_range_to_stream_async(concurrency::streams::ostream target, utility::size64_t offset, utility::size64_t length, const file_access_condition& condition, const file_request_options& options, operation_context context, bool update_properties, bool validate_last_modify) const
         {
-            UNREFERENCED_PARAMETER(condition);
             file_request_options modified_options(options);
             modified_options.apply_defaults(service_client().default_request_options());
@@ -429,7 +439,7 @@ namespace azure { namespace storage {
             std::shared_ptr<core::storage_command<void>> command = std::make_shared<core::storage_command<void>>(uri());
             std::weak_ptr<core::storage_command<void>> weak_command(command);
-            command->set_build_request([offset, length, modified_options, download_info](web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) -> web::http::http_request
+            command->set_build_request([offset, length, condition, modified_options, download_info](web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) -> web::http::http_request
             {
                 utility::size64_t current_offset = offset;
                 utility::size64_t current_length = length;
@@ -454,12 +464,12 @@ namespace azure { namespace storage {
                     }
                 }
 
-                return protocol::get_file(current_offset, current_length,
 modified_options.use_transactional_md5() && !download_info->m_are_properties_populated, uri_builder, timeout, context);
+                return protocol::get_file(current_offset, current_length, modified_options.use_transactional_md5() && !download_info->m_are_properties_populated, condition, uri_builder, timeout, context);
             });
             command->set_authentication_handler(service_client().authentication_handler());
             command->set_location_mode(core::command_location_mode::primary_or_secondary);
             command->set_destination_stream(target);
-            command->set_calculate_response_body_md5(!modified_options.disable_content_md5_validation());
+            command->set_calculate_response_body_checksum(modified_options.disable_content_md5_validation() ? checksum_type::none : checksum_type::md5);
             command->set_recover_request([target, download_info](utility::size64_t total_written_to_destination_stream, operation_context context) -> bool
             {
                 if (download_info->m_reset_target)
                 {
@@ -483,7 +493,7 @@ namespace azure { namespace storage {
                     download_info->m_total_written_to_destination_stream = total_written_to_destination_stream;
                 }
 
-                return true;
+                return target.is_open();
             });
             command->set_preprocess_response([weak_command, offset, modified_options, properties, metadata, copy_state, download_info, update_properties, validate_last_modify](const web::http::http_response& response, const request_result& result, operation_context context)
             {
@@ -529,10 +539,10 @@ namespace azure { namespace storage {
                     // Consider the file has no MD5 hash in default.
                     && offset < std::numeric_limits<utility::size64_t>::max())
                 {
-                    throw storage_exception(protocol::error_missing_md5);
+                    throw storage_exception(protocol::error_missing_md5, false);
                 }
 
-                // Lock to the current storage location when resuming a failed download. This is locked 
+                // Lock to the current storage location when resuming a failed download. This is locked
                 // early before the retry policy has the opportunity to change the storage location.
                 command->set_location_mode(core::command_location_mode::primary_or_secondary, result.target_location());
@@ -551,7 +561,7 @@ namespace azure { namespace storage {
                 command->set_location_mode(core::command_location_mode::primary_or_secondary);
 
-                if (!download_info->m_response_md5.empty() && !descriptor.content_md5().empty() && download_info->m_response_md5 != descriptor.content_md5())
+                if (!download_info->m_response_md5.empty() && !descriptor.content_checksum().md5().empty() && download_info->m_response_md5 != descriptor.content_checksum().md5())
                 {
                     throw storage_exception(protocol::error_md5_mismatch);
                 }
@@ -574,10 +584,23 @@ namespace azure { namespace storage {
                 single_file_download_threshold = protocol::default_single_block_download_threshold;
             }
 
+            if (offset >= std::numeric_limits<utility::size64_t>::max())
+            {
+                if (length == 0)
+                {
+                    offset = 0;
+                    length = std::numeric_limits<utility::size64_t>::max();
+                }
+                else
+                {
+                    throw std::invalid_argument("length");
+                }
+            }
+
             // download first range.
             // if 416 thrown, it's an empty blob. need to download attributes.
             // otherwise, properties must be updated for further parallel download.
-            return instance->download_single_range_to_stream_async(target, 0, single_file_download_threshold, condition, options, context, true).then([=](pplx::task<void> download_task)
+            return instance->download_single_range_to_stream_async(target, offset, length < single_file_download_threshold ? length : single_file_download_threshold, condition, options, context, true).then([=](pplx::task<void> download_task)
             {
                 try
                 {
@@ -587,7 +610,7 @@ namespace azure { namespace storage {
                 {
                     // For empty blob, swallow the exception and update the attributes.
                     if (e.result().http_status_code() == web::http::status_codes::RangeNotSatisfiable
-                        && offset >= std::numeric_limits<utility::size64_t>::max())
+                        && offset == 0)
                     {
                         return instance->download_attributes_async(condition, options, context);
                     }
@@ -597,28 +620,24 @@ namespace azure { namespace storage {
                     }
                 }
 
-                if ((offset >= std::numeric_limits<utility::size64_t>::max() && instance->properties().size() <= single_file_download_threshold)
-                    || (offset < std::numeric_limits<utility::size64_t>::max() && length <= single_file_download_threshold))
-                {
-                    return pplx::task_from_result();
-                }
-
                 // download the rest data in parallel.
-                utility::size64_t target_offset;
-                utility::size64_t target_length;
-
-                if (offset >= std::numeric_limits<utility::size64_t>::max())
+                utility::size64_t target_offset = offset;
+                utility::size64_t target_length = length;
+                if (target_length >= std::numeric_limits<utility::size64_t>::max()
+                    || target_length > instance->properties().size() - offset)
                 {
-                    target_offset = single_file_download_threshold;
-                    target_length = instance->properties().size() - single_file_download_threshold;
+                    target_length = instance->properties().size() - offset;
                 }
-                else
+
+                // Download completes in first range download.
+                if (target_length <= single_file_download_threshold)
                 {
-                    target_offset = offset + single_file_download_threshold;
-                    target_length = length - single_file_download_threshold;
+                    return pplx::task_from_result();
                 }
+                target_offset += single_file_download_threshold;
+                target_length -= single_file_download_threshold;
 
-                return pplx::task_from_result().then([instance, target, target_offset, target_length, single_file_download_threshold, condition, options, context]()
+                return pplx::task_from_result().then([instance, offset, target, target_offset, target_length, single_file_download_threshold, condition, options, context]()
                 {
                     auto semaphore = std::make_shared<core::async_semaphore>(options.parallelism_factor());
                     // lock to the target ostream
@@ -630,19 +649,20 @@ namespace azure { namespace storage {
                     auto smallest_offset = std::make_shared<utility::size64_t>(target_offset);
                     auto condition_variable = std::make_shared<std::condition_variable>();
                     std::mutex condition_variable_mutex;
-                    for (utility::size64_t current_offset = target_offset; current_offset < target_offset + target_length; current_offset += protocol::single_block_size)
+                    for (utility::size64_t current_offset = target_offset; current_offset < target_offset + target_length; current_offset += protocol::transactional_md5_block_size)
                     {
-                        utility::size64_t current_length = protocol::single_block_size;
+                        utility::size64_t current_length = protocol::transactional_md5_block_size;
                         if (current_offset + current_length > target_offset + target_length)
                         {
                             current_length = target_offset + target_length - current_offset;
                        }
-                        semaphore->lock_async().then([instance, &mutex, semaphore, condition_variable, &condition_variable_mutex, &writer, target, smallest_offset, current_offset, current_length, condition, options, context]()
+                        semaphore->lock_async().then([instance, &mutex, semaphore, condition_variable, &condition_variable_mutex, &writer, offset, target, smallest_offset, current_offset, current_length, condition, options, context]()
                        {
                             concurrency::streams::container_buffer<std::vector<uint8_t>> buffer;
                             auto segment_ostream = buffer.create_ostream();
                             // if transaction MD5 is enabled, it will be checked inside each download_single_range_to_stream_async.
-                            instance->download_single_range_to_stream_async(segment_ostream, current_offset, current_length, condition, options, context, false, true).then([buffer, segment_ostream, semaphore, condition_variable, &condition_variable_mutex, smallest_offset, current_offset, current_length, &mutex, target, &writer, options](pplx::task<void> download_task)
+                            instance->download_single_range_to_stream_async(segment_ostream, current_offset, current_length, condition, options, context)
+                            .then([buffer, segment_ostream, semaphore, condition_variable, &condition_variable_mutex, smallest_offset, offset, current_offset, current_length, &mutex, target, &writer, options](pplx::task<void> download_task)
                             {
                                 segment_ostream.close().then([download_task](pplx::task<void> close_task)
                                 {
@@ -656,9 +676,9 @@ namespace azure { namespace storage {
                                 if (target.can_seek())
                                 {
                                     pplx::extensibility::scoped_rw_lock_t guard(mutex);
-                                    target.streambuf().seekpos(current_offset, std::ios_base::out);
+                                    target.streambuf().seekpos(current_offset - offset, std::ios_base::out);
                                     target.streambuf().putn_nocopy(buffer.collection().data(), buffer.collection().size()).wait();
-                                    *smallest_offset += protocol::single_block_size;
+                                    *smallest_offset += protocol::transactional_md5_block_size;
                                     released = true;
                                     semaphore->unlock();
                                 }
@@ -669,7 +689,7 @@ namespace azure { namespace storage {
                                     if (*smallest_offset == current_offset)
                                     {
                                         target.streambuf().putn_nocopy(buffer.collection().data(), buffer.collection().size()).wait();
-                                        *smallest_offset += protocol::single_block_size;
+                                        *smallest_offset += protocol::transactional_md5_block_size;
                                         condition_variable->notify_all();
                                         released = true;
                                         semaphore->unlock();
@@ -695,7 +715,7 @@ namespace azure { namespace storage {
                                     if (*smallest_offset == current_offset)
                                     {
                                         target.streambuf().putn_nocopy(buffer.collection().data(), buffer.collection().size()).wait();
-                                        *smallest_offset += protocol::single_block_size;
+                                        *smallest_offset += protocol::transactional_md5_block_size;
                                     }
                                     else if (*smallest_offset > current_offset)
                                     {
@@ -772,7 +792,7 @@ namespace azure { namespace storage {
                 return core::cloud_file_ostreambuf(instance, instance->properties().length(), access_condition, modified_options, context).create_ostream();
             });
         }
-
+
         pplx::task<concurrency::streams::ostream> cloud_file::open_write_async(utility::size64_t length, const file_access_condition& access_condition, const file_request_options& options, operation_context context) const
         {
             file_request_options modified_options(options);
@@ -784,7 +804,7 @@ namespace azure { namespace storage {
                 return core::cloud_file_ostreambuf(instance, length, access_condition, modified_options, context).create_ostream();
             });
         }
-
+
         pplx::task<void> cloud_file::upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, const file_access_condition& access_condition, const file_request_options& options, operation_context context) const
         {
             file_request_options modified_options(options);
@@ -807,7 +827,7 @@ namespace azure { namespace storage {
                 });
             });
         }
-
+
         pplx::task<void> cloud_file::upload_from_file_async(const utility::string_t& path, const file_access_condition& access_condition, const file_request_options& options, operation_context context) const
         {
             auto instance = std::make_shared<cloud_file>(*this);
@@ -822,7 +842,7 @@ namespace azure { namespace storage {
                });
            });
        }
-
+
        pplx::task<void> cloud_file::upload_text_async(const utility::string_t& text, const file_access_condition& condition, const file_request_options& options, operation_context context) const
        {
            auto utf8_body = utility::conversions::to_utf8string(text);
@@ -834,7 +854,6 @@ namespace azure { namespace storage {
        pplx::task<void> cloud_file::resize_async(int64_t length, const file_access_condition& access_condition, const file_request_options& options, operation_context context) const
        {
-            UNREFERENCED_PARAMETER(access_condition);
            file_request_options modified_options(options);
            modified_options.apply_defaults(service_client().default_request_options());
@@ -842,7 +861,7 @@ namespace azure { namespace storage {
            properties->m_length = length;

            auto command = std::make_shared<core::storage_command<void>>(uri());
-            command->set_build_request(std::bind(protocol::set_file_properties, this->properties(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_build_request(std::bind(protocol::resize_with_properties, this->properties(), access_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
            command->set_authentication_handler(service_client().authentication_handler());
            command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context)
            {
@@ -885,7 +904,6 @@ namespace azure { namespace storage {
        pplx::task<bool> cloud_file::exists_async(bool primary_only, const file_access_condition& access_condition, const file_request_options& options, operation_context context) const
        {
-            UNREFERENCED_PARAMETER(access_condition);
            file_request_options modified_options(options);
            modified_options.apply_defaults(service_client().default_request_options());
@@ -893,7 +911,7 @@ namespace azure { namespace storage {
            auto metadata = m_metadata;

            auto command = std::make_shared<core::storage_command<bool>>(uri());
-            command->set_build_request(std::bind(protocol::get_file_properties, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_build_request(std::bind(protocol::get_file_properties, access_condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
            command->set_authentication_handler(service_client().authentication_handler());
            command->set_location_mode(primary_only ?
 core::command_location_mode::primary_only : core::command_location_mode::primary_or_secondary);
             command->set_preprocess_response([properties, metadata](const web::http::http_response& response, const request_result& result, operation_context context)
@@ -910,4 +928,90 @@ namespace azure { namespace storage {
             return core::executor::execute_async(command, modified_options, context);
         }
-}}
\ No newline at end of file
+        pplx::task<utility::string_t> cloud_file::acquire_lease_async(const utility::string_t& proposed_lease_id, const file_access_condition& condition, const file_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
+        {
+            file_request_options modified_options(options);
+            modified_options.apply_defaults(service_client().default_request_options());
+
+            auto properties = m_properties;
+
+            auto command = std::make_shared<core::storage_command<utility::string_t>>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
+            command->set_build_request(std::bind(protocol::lease_file, protocol::header_value_lease_acquire, proposed_lease_id, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_authentication_handler(service_client().authentication_handler());
+            command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t
+            {
+                protocol::preprocess_response_void(response, result, context);
+                auto response_properties = protocol::file_response_parsers::parse_file_properties(response);
+                properties->update_etag_and_last_modified(response_properties);
+                properties->update_lease(response_properties);
+                return protocol::parse_lease_id(response);
+            });
+
+            return core::executor::execute_async(command, modified_options, context);
+        }
+
+        pplx::task<utility::string_t> cloud_file::change_lease_async(const utility::string_t& proposed_lease_id, const file_access_condition& condition, const file_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
+        {
+            file_request_options modified_options(options);
+            modified_options.apply_defaults(service_client().default_request_options());
+
+            auto properties = m_properties;
+
+            auto command = std::make_shared<core::storage_command<utility::string_t>>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
+            command->set_build_request(std::bind(protocol::lease_file, protocol::header_value_lease_change, proposed_lease_id, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+            command->set_authentication_handler(service_client().authentication_handler());
+            command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t
+            {
+                protocol::preprocess_response_void(response, result, context);
+                auto response_properties = protocol::file_response_parsers::parse_file_properties(response);
+                properties->update_etag_and_last_modified(response_properties);
+                properties->update_lease(response_properties);
+                return protocol::parse_lease_id(response);
+            });
+
+            return core::executor::execute_async(command, modified_options, context);
+        }
+
+        pplx::task<void> cloud_file::release_lease_async(const file_access_condition& condition, const file_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
+        {
+            file_request_options modified_options(options);
+            modified_options.apply_defaults(service_client().default_request_options());
+
+            auto properties = m_properties;
+
+            auto command = std::make_shared<core::storage_command<void>>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
+            command->set_build_request(std::bind(protocol::lease_file, protocol::header_value_lease_release, utility::string_t(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
command->set_authentication_handler(service_client().authentication_handler()); + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> void + { + protocol::preprocess_response_void(response, result, context); + auto response_properties = protocol::file_response_parsers::parse_file_properties(response); + properties->update_etag_and_last_modified(response_properties); + properties->update_lease(response_properties); + }); + + return core::executor::execute_async(command, modified_options, context); + } + + pplx::task cloud_file::break_lease_async(const file_access_condition& condition, const file_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const + { + file_request_options modified_options(options); + modified_options.apply_defaults(service_client().default_request_options()); + + auto properties = m_properties; + + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); + command->set_build_request(std::bind(protocol::lease_file, protocol::header_value_lease_break, utility::string_t(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + command->set_authentication_handler(service_client().authentication_handler()); + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> void + { + protocol::preprocess_response_void(response, result, context); + auto response_properties = protocol::file_response_parsers::parse_file_properties(response); + properties->update_etag_and_last_modified(response_properties); + properties->update_lease(response_properties); + }); + + return core::executor::execute_async(command, modified_options, context); + } + +}} diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_file_directory.cpp 
b/Microsoft.WindowsAzure.Storage/src/cloud_file_directory.cpp index e1bbfe80..8dee5ed8 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_file_directory.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_file_directory.cpp @@ -31,11 +31,6 @@ namespace azure { namespace storage { m_last_modified = other.last_modified(); } - void cloud_file_directory_properties::update_etag(const cloud_file_directory_properties& other) - { - m_etag = other.etag(); - } - cloud_file_directory::cloud_file_directory(storage_uri uri) : m_uri(std::move(uri)), m_metadata(std::make_shared()), m_properties(std::make_shared()) { @@ -86,24 +81,24 @@ namespace azure { namespace storage { m_share = cloud_file_share(std::move(share_name), cloud_file_client(core::get_service_client_uri(m_uri), std::move(credentials))); } - list_file_and_diretory_result_iterator cloud_file_directory::list_files_and_directories(int64_t max_results, const file_request_options& options, operation_context context) const + list_file_and_diretory_result_iterator cloud_file_directory::list_files_and_directories(const utility::string_t& prefix, int64_t max_results, const file_request_options& options, operation_context context) const { auto instance = std::make_shared(*this); return list_file_and_diretory_result_iterator( - [instance, options, context](const continuation_token& token, size_t max_results_per_segment) + [instance, prefix, options, context](const continuation_token& token, size_t max_results_per_segment) { - return instance->list_files_and_directories_segmented(max_results_per_segment, token, options, context); + return instance->list_files_and_directories_segmented(prefix, max_results_per_segment, token, options, context); }, max_results, 0); } - pplx::task cloud_file_directory::list_files_and_directories_segmented_async(int64_t max_results, const continuation_token& token, const file_request_options& options, operation_context context) const + pplx::task 
cloud_file_directory::list_files_and_directories_segmented_async(const utility::string_t& prefix, int64_t max_results, const continuation_token& token, const file_request_options& options, operation_context context) const { file_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options()); auto command = std::make_shared>(uri()); - command->set_build_request(std::bind(protocol::list_files_and_directories, max_results, token, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + command->set_build_request(std::bind(protocol::list_files_and_directories, prefix, max_results, token, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); command->set_location_mode(core::command_location_mode::primary_or_secondary, token.target_location()); command->set_preprocess_response(std::bind(protocol::preprocess_response, list_file_and_directory_result_segment(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); @@ -134,7 +129,7 @@ namespace azure { namespace storage { auto properties = m_properties; auto command = std::make_shared>(uri()); - command->set_build_request(std::bind(protocol::create_file_directory, metadata(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + command->set_build_request(std::bind(protocol::create_file_directory, metadata(), this->properties(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { @@ -234,6 +229,26 @@ namespace azure { namespace storage { return core::executor::execute_async(command, modified_options, context); } + pplx::task cloud_file_directory::upload_properties_async(const 
file_access_condition& access_condition, const file_request_options& options, operation_context context) const + { + UNREFERENCED_PARAMETER(access_condition); + file_request_options modified_options(options); + modified_options.apply_defaults(service_client().default_request_options()); + + auto properties = m_properties; + + auto command = std::make_shared>(uri()); + command->set_build_request(std::bind(protocol::set_file_directory_properties, this->properties(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + command->set_authentication_handler(service_client().authentication_handler()); + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) + { + protocol::preprocess_response_void(response, result, context); + *properties = protocol::file_response_parsers::parse_file_directory_properties(response); + }); + + return core::executor::execute_async(command, modified_options, context); + } + pplx::task cloud_file_directory::upload_metadata_async(const file_access_condition& access_condition, const file_request_options& options, operation_context context) const { UNREFERENCED_PARAMETER(access_condition); @@ -248,7 +263,7 @@ namespace azure { namespace storage { command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); - properties->update_etag(protocol::file_response_parsers::parse_file_directory_properties(response)); + properties->update_etag_and_last_modified(protocol::file_response_parsers::parse_file_directory_properties(response)); }); return core::executor::execute_async(command, modified_options, context); } diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_file_ostreambuf.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_file_ostreambuf.cpp index 211f19a7..8f6d7de5 100644 --- 
a/Microsoft.WindowsAzure.Storage/src/cloud_file_ostreambuf.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_file_ostreambuf.cpp @@ -48,7 +48,7 @@ namespace azure { namespace storage { namespace core { { if (this_pointer->m_total_hash_provider.is_enabled()) { - this_pointer->m_file->properties().set_content_md5(this_pointer->m_total_hash_provider.hash()); + this_pointer->m_file->properties().set_content_md5(this_pointer->m_total_hash_provider.hash().md5()); return this_pointer->m_file->upload_properties_async(this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context); } @@ -74,7 +74,7 @@ namespace azure { namespace storage { namespace core { { try { - this_pointer->m_file->write_range_async(buffer->stream(), offset, buffer->content_md5(), this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context).then([this_pointer](pplx::task upload_task) + this_pointer->m_file->write_range_async(buffer->stream(), offset, buffer->content_checksum().md5(), this_pointer->m_condition, this_pointer->m_options, this_pointer->m_context).then([this_pointer](pplx::task upload_task) { std::lock_guard guard(this_pointer->m_semaphore, std::adopt_lock); try diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_file_share.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_file_share.cpp index 491f5114..8392a4a8 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_file_share.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_file_share.cpp @@ -20,6 +20,7 @@ #include "was/error_code_strings.h" #include "wascore/protocol.h" #include "wascore/protocol_xml.h" +#include "wascore/protocol_json.h" #include "wascore/util.h" #include "wascore/constants.h" @@ -83,7 +84,7 @@ namespace azure { namespace storage { pplx::task cloud_file_share::create_async(const file_request_options& options, operation_context context) { - return create_async(protocol::maximum_share_quota, options, context); + return create_async(std::numeric_limits::max(), options, context); } pplx::task 
cloud_file_share::create_async(utility::size64_t max_size, const file_request_options& options, operation_context context) @@ -106,7 +107,7 @@ namespace azure { namespace storage { pplx::task cloud_file_share::create_if_not_exists_async(const file_request_options& options, operation_context context) { - return create_if_not_exists_async(protocol::maximum_share_quota, options, context); + return create_if_not_exists_async(std::numeric_limits::max(), options, context); } pplx::task cloud_file_share::create_if_not_exists_async(utility::size64_t max_size, const file_request_options& options, operation_context context) @@ -265,13 +266,25 @@ namespace azure { namespace storage { return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_file_share::download_share_usage_aysnc(const file_access_condition& condition, const file_request_options& options, operation_context context) const + pplx::task cloud_file_share::download_share_usage_async(const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + UNREFERENCED_PARAMETER(condition); + + return download_share_usage_in_bytes_async(condition, options, context).then([](int64_t size_in_bytes) -> pplx::task + { + const int64_t bytes_per_gigabyte = 1024LL * 1024 * 1024; + int32_t size_in_gb = static_cast((size_in_bytes + bytes_per_gigabyte - 1) / bytes_per_gigabyte); + return pplx::task_from_result(size_in_gb); + }); + } + + pplx::task cloud_file_share::download_share_usage_in_bytes_async(const file_access_condition& condition, const file_request_options& options, operation_context context) const { UNREFERENCED_PARAMETER(condition); file_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options()); - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri()); command->set_build_request(std::bind(protocol::get_file_share_stats, std::placeholders::_1, 
std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); command->set_preprocess_response([](const web::http::http_response& response, const request_result& result, operation_context context) @@ -280,7 +293,7 @@ namespace azure { namespace storage { protocol::get_share_stats_reader reader(response.body()); return reader.get(); }); - return core::executor::execute_async(command, modified_options, context); + return core::executor::execute_async(command, modified_options, context); } utility::string_t cloud_file_share::get_shared_access_signature(const file_shared_access_policy& policy, const utility::string_t& stored_policy_identifier, const cloud_file_shared_access_headers& headers) const @@ -318,15 +331,17 @@ namespace azure { namespace storage { file_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options()); + auto update_properties = m_properties; auto properties = cloud_file_share_properties(*m_properties); properties.m_quota = quota; auto command = std::make_shared>(uri()); command->set_build_request(std::bind(protocol::set_file_share_properties, properties, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([](const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([update_properties](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); + update_properties->update_etag_and_last_modified(protocol::file_response_parsers::parse_file_share_properties(response)); }); return core::executor::execute_async(command, modified_options, context); } @@ -384,4 +399,53 @@ namespace azure { namespace storage { }); } -}} // 
namespace azure::storage \ No newline at end of file + pplx::task cloud_file_share::download_file_permission_async(const utility::string_t& permission_key, const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + UNREFERENCED_PARAMETER(condition); + file_request_options modified_options(options); + modified_options.apply_defaults(service_client().default_request_options()); + + auto command = std::make_shared>(uri()); + command->set_build_request(std::bind(protocol::get_file_share_permission, permission_key, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + command->set_authentication_handler(service_client().authentication_handler()); + command->set_preprocess_response([](const web::http::http_response& response, const request_result& result, operation_context context) + { + protocol::preprocess_response_void(response, result, context); + return utility::string_t(); + }); + command->set_postprocess_response([](const web::http::http_response& response, const request_result&, const core::ostream_descriptor&, operation_context context) -> pplx::task + { + return response.extract_json(/* ignore_content_type */ true).then([](const web::json::value& obj) -> pplx::task + { + return pplx::task_from_result(protocol::parse_file_permission(obj)); + }); + }); + return core::executor::execute_async(command, modified_options, context); + } + + pplx::task cloud_file_share::upload_file_permission_async(const utility::string_t& permission, const file_access_condition& condition, const file_request_options& options, operation_context context) const + { + UNREFERENCED_PARAMETER(condition); + file_request_options modified_options(options); + modified_options.apply_defaults(service_client().default_request_options()); + + auto command = std::make_shared>(uri()); + command->set_build_request(std::bind(protocol::set_file_share_permission, std::placeholders::_1, std::placeholders::_2, 
std::placeholders::_3)); + command->set_authentication_handler(service_client().authentication_handler()); + command->set_preprocess_response([](const web::http::http_response& response, const request_result& result, operation_context context) + { + protocol::preprocess_response_void(response, result, context); + const auto& headers = response.headers(); + auto ite = headers.find(protocol::ms_header_file_permission_key); + return ite == headers.end() ? utility::string_t() : ite->second; + }); + + concurrency::streams::istream stream(concurrency::streams::bytestream::open_istream(utility::conversions::to_utf8string(protocol::construct_file_permission(permission)))); + return core::istream_descriptor::create(stream).then([command, context, modified_options](core::istream_descriptor request_body) + { + command->set_request_body(request_body); + return core::executor::execute_async(command, modified_options, context); + }); + } + +}} // namespace azure::storage diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_page_blob.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_page_blob.cpp index 34c4c118..397fd026 100644 --- a/Microsoft.WindowsAzure.Storage/src/cloud_page_blob.cpp +++ b/Microsoft.WindowsAzure.Storage/src/cloud_page_blob.cpp @@ -22,7 +22,7 @@ namespace azure { namespace storage { - pplx::task cloud_page_blob::clear_pages_async(int64_t start_offset, int64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_page_blob::clear_pages_async(int64_t start_offset, int64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { assert_no_snapshot(); blob_request_options modified_options(options); @@ -33,10 +33,10 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); - command->set_build_request(std::bind(protocol::put_page, range, 
page_write::clear, utility::string_t(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); + command->set_build_request(std::bind(protocol::put_page, range, page_write::clear, checksum(checksum_none), condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); @@ -47,18 +47,28 @@ namespace azure { namespace storage { return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_page_blob::upload_pages_async(concurrency::streams::istream page_data, int64_t start_offset, const utility::string_t& content_md5, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_page_blob::upload_pages_async_impl(concurrency::streams::istream page_data, int64_t start_offset, const checksum& content_checksum, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_timeout, std::shared_ptr timer_handler) { assert_no_snapshot(); blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), type()); auto properties = m_properties; - bool needs_md5 = content_md5.empty() && modified_options.use_transactional_md5(); - - auto command = std::make_shared>(uri()); + bool needs_md5 = 
modified_options.use_transactional_md5() && !content_checksum.is_md5(); + bool needs_crc64 = modified_options.use_transactional_crc64() && !content_checksum.is_crc64(); + checksum_type needs_checksum = checksum_type::none; + if (needs_md5) + { + needs_checksum = checksum_type::md5; + } + else if (needs_crc64) + { + needs_checksum = checksum_type::crc64; + } + + auto command = std::make_shared>(uri(), cancellation_token, (modified_options.is_maximum_execution_time_customized() && use_timeout), timer_handler); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); @@ -66,18 +76,18 @@ namespace azure { namespace storage { properties->update_etag_and_last_modified(parsed_properties); properties->update_page_blob_sequence_number(parsed_properties); }); - return core::istream_descriptor::create(page_data, needs_md5, std::numeric_limits::max(), protocol::max_block_size).then([command, context, start_offset, content_md5, modified_options, condition](core::istream_descriptor request_body) -> pplx::task + return core::istream_descriptor::create(page_data, needs_checksum, std::numeric_limits::max(), protocol::max_page_size, command->get_cancellation_token()).then([command, context, start_offset, content_checksum, modified_options, condition, cancellation_token](core::istream_descriptor request_body) -> pplx::task { - const utility::string_t& md5 = content_md5.empty() ? request_body.content_md5() : content_md5; + const auto& checksum = content_checksum.empty() ? 
request_body.content_checksum() : content_checksum; auto end_offset = start_offset + request_body.length() - 1; page_range range(start_offset, end_offset); - command->set_build_request(std::bind(protocol::put_page, range, page_write::update, md5, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); + command->set_build_request(std::bind(protocol::put_page, range, page_write::update, checksum, condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_request_body(request_body); return core::executor::execute_async(command, modified_options, context); }); } - pplx::task cloud_page_blob::open_write_async(const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_page_blob::open_write_async(const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { assert_no_snapshot(); blob_request_options modified_options(options); @@ -89,31 +99,37 @@ namespace azure { namespace storage { } auto instance = std::make_shared(*this); - return instance->download_attributes_async(condition, modified_options, context).then([instance, condition, modified_options, context] () -> concurrency::streams::ostream + return instance->download_attributes_async(condition, modified_options, context, cancellation_token).then([instance, condition, modified_options, context, cancellation_token] () -> concurrency::streams::ostream { - return core::cloud_page_blob_ostreambuf(instance, instance->properties().size(), condition, modified_options, context).create_ostream(); + return core::cloud_page_blob_ostreambuf(instance, instance->properties().size(), condition, modified_options, context, cancellation_token, true, nullptr).create_ostream(); }); } - pplx::task cloud_page_blob::open_write_async(utility::size64_t size, int64_t sequence_number, const access_condition& 
condition, const blob_request_options& options, operation_context context) + pplx::task cloud_page_blob::open_write_async_impl(utility::size64_t size, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token, bool use_request_level_timeout, std::shared_ptr timer_handler) { assert_no_snapshot(); blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), type(), false); auto instance = std::make_shared(*this); - return instance->create_async(size, sequence_number, condition, modified_options, context).then([instance, size, condition, modified_options, context]() -> concurrency::streams::ostream + return instance->create_async(size, sequence_number, condition, modified_options, context).then([instance, size, condition, modified_options, context, cancellation_token, use_request_level_timeout, timer_handler]() -> concurrency::streams::ostream { - return core::cloud_page_blob_ostreambuf(instance, size, condition, modified_options, context).create_ostream(); + return core::cloud_page_blob_ostreambuf(instance, size, condition, modified_options, context, cancellation_token, use_request_level_timeout, timer_handler).create_ostream(); }); } - pplx::task cloud_page_blob::upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_page_blob::upload_from_stream_async(concurrency::streams::istream source, utility::size64_t length, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { assert_no_snapshot(); + std::shared_ptr timer_handler = std::make_shared(cancellation_token); blob_request_options 
modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), type()); + if (modified_options.is_maximum_execution_time_customized()) + { + timer_handler->start_timer(options.maximum_execution_time());// azure::storage::core::timer_handler will automatically stop the timer when destructed. + } + if (length == std::numeric_limits::max()) { length = core::get_remaining_stream_length(source); @@ -123,19 +139,37 @@ namespace azure { namespace storage { } } - return open_write_async(length, sequence_number, condition, modified_options, context).then([source, length](concurrency::streams::ostream blob_stream) -> pplx::task + return open_write_async_impl(length, sequence_number, condition, modified_options, context, timer_handler->get_cancellation_token(), false, timer_handler).then([source, length, timer_handler, options](concurrency::streams::ostream blob_stream) -> pplx::task { - return core::stream_copy_async(source, blob_stream, length).then([blob_stream] (utility::size64_t) -> pplx::task + return core::stream_copy_async(source, blob_stream, length, std::numeric_limits::max(), timer_handler->get_cancellation_token(), timer_handler).then([blob_stream, timer_handler, options] (pplx::task copy_task) -> pplx::task { - return blob_stream.close(); + return blob_stream.close().then([timer_handler, copy_task](pplx::task close_task) + { + try + { + copy_task.wait(); + } + catch (const std::exception&) + { + try + { + close_task.wait(); + } + catch (...) 
+ { + } + throw; + } + close_task.wait(); + }); }); }); } - pplx::task cloud_page_blob::upload_from_file_async(const utility::string_t& path, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_page_blob::upload_from_file_async(const utility::string_t& path, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { auto instance = std::make_shared(*this); - return concurrency::streams::file_stream::open_istream(path).then([instance, sequence_number, condition, options, context](concurrency::streams::istream stream) -> pplx::task + return concurrency::streams::file_stream::open_istream(path).then([instance, sequence_number, condition, options, context, cancellation_token](concurrency::streams::istream stream) -> pplx::task { return instance->upload_from_stream_async(stream, sequence_number, condition, options, context).then([stream](pplx::task upload_task) -> pplx::task { @@ -147,7 +181,7 @@ namespace azure { namespace storage { }); } - pplx::task cloud_page_blob::create_async(utility::size64_t size, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_page_blob::create_async(utility::size64_t size, const premium_blob_tier tier, int64_t sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { assert_no_snapshot(); blob_request_options modified_options(options); @@ -155,30 +189,31 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); - command->set_build_request(std::bind(protocol::put_page_blob, size, sequence_number, *properties, metadata(), condition, std::placeholders::_1, std::placeholders::_2, 
std::placeholders::_3)); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); + command->set_build_request(std::bind(protocol::put_page_blob, size, get_premium_access_tier_string(tier), sequence_number, *properties, metadata(), condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties, size] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties, size, tier](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); properties->update_etag_and_last_modified(protocol::blob_response_parsers::parse_blob_properties(response)); properties->m_size = size; + properties->m_premium_blob_tier = tier; }); return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_page_blob::resize_async(utility::size64_t size, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_page_blob::resize_async(utility::size64_t size, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { assert_no_snapshot(); blob_request_options modified_options(options); modified_options.apply_defaults(service_client().default_request_options(), type()); auto properties = m_properties; - - auto command = std::make_shared>(uri()); + + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::resize_page_blob, size, condition, std::placeholders::_1, std::placeholders::_2, 
std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties, size] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties, size](const web::http::http_response& response, const request_result& result, operation_context context) { protocol::preprocess_response_void(response, result, context); @@ -190,7 +225,7 @@ namespace azure { namespace storage { return core::executor::execute_async(command, modified_options, context); } - pplx::task cloud_page_blob::set_sequence_number_async(const azure::storage::sequence_number& sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context) + pplx::task cloud_page_blob::set_sequence_number_async(const azure::storage::sequence_number& sequence_number, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) { assert_no_snapshot(); blob_request_options modified_options(options); @@ -198,10 +233,10 @@ namespace azure { namespace storage { auto properties = m_properties; - auto command = std::make_shared>(uri()); + auto command = std::make_shared>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized()); command->set_build_request(std::bind(protocol::set_page_blob_sequence_number, sequence_number, condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3)); command->set_authentication_handler(service_client().authentication_handler()); - command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) + command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) { 
             protocol::preprocess_response_void(response, result, context);
@@ -212,18 +247,18 @@ namespace azure { namespace storage {
         return core::executor<void>::execute_async(command, modified_options, context);
     }
 
-    pplx::task<std::vector<page_range>> cloud_page_blob::download_page_ranges_async(utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) const
+    pplx::task<std::vector<page_range>> cloud_page_blob::download_page_ranges_async(utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
     {
         blob_request_options modified_options(options);
         modified_options.apply_defaults(service_client().default_request_options(), type());
 
         auto properties = m_properties;
 
-        auto command = std::make_shared<core::storage_command<std::vector<page_range>>>(uri());
+        auto command = std::make_shared<core::storage_command<std::vector<page_range>>>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
         command->set_build_request(std::bind(protocol::get_page_ranges, offset, length, snapshot_time(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
         command->set_authentication_handler(service_client().authentication_handler());
         command->set_location_mode(core::command_location_mode::primary_or_secondary);
-        command->set_preprocess_response([properties] (const web::http::http_response& response, const request_result& result, operation_context context) -> std::vector<page_range>
+        command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> std::vector<page_range>
         {
             protocol::preprocess_response_void(response, result, context);
@@ -241,15 +276,15 @@ namespace azure { namespace storage {
         return core::executor<std::vector<page_range>>::execute_async(command, modified_options, context);
     }
 
-    pplx::task<std::vector<page_diff_range>> cloud_page_blob::download_page_ranges_diff_async(utility::string_t previous_snapshot_time, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context) const
+    pplx::task<std::vector<page_diff_range>> cloud_page_blob::download_page_ranges_diff_async_impl(const utility::string_t& previous_snapshot_time, const utility::string_t& previous_snapshot_url, utility::size64_t offset, utility::size64_t length, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token) const
     {
         blob_request_options modified_options(options);
         modified_options.apply_defaults(service_client().default_request_options(), type());
 
         auto properties = m_properties;
 
-        auto command = std::make_shared<core::storage_command<std::vector<page_diff_range>>>(uri());
-        command->set_build_request(std::bind(protocol::get_page_ranges_diff, previous_snapshot_time, offset, length, snapshot_time(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+        auto command = std::make_shared<core::storage_command<std::vector<page_diff_range>>>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
+        command->set_build_request(std::bind(protocol::get_page_ranges_diff, previous_snapshot_time, previous_snapshot_url, offset, length, snapshot_time(), condition, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
         command->set_authentication_handler(service_client().authentication_handler());
         command->set_location_mode(core::command_location_mode::primary_or_secondary);
         command->set_preprocess_response([properties](const web::http::http_response& response, const request_result& result, operation_context context) -> std::vector<page_diff_range>
@@ -269,4 +304,52 @@ namespace azure { namespace storage {
         });
         return core::executor<std::vector<page_diff_range>>::execute_async(command, modified_options, context);
     }
+
+    pplx::task<utility::string_t> cloud_page_blob::start_incremental_copy_async(const web::http::uri& source, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
+    {
+        assert_no_snapshot();
+        blob_request_options modified_options(options);
+        modified_options.apply_defaults(service_client().default_request_options(), type());
+
+        auto copy_state = m_copy_state;
+
+        auto command = std::make_shared<core::storage_command<utility::string_t>>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
+        command->set_build_request(std::bind(protocol::incremental_copy_blob, source, condition, metadata(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+        command->set_authentication_handler(service_client().authentication_handler());
+        command->set_preprocess_response([copy_state](const web::http::http_response& response, const request_result& result, operation_context context) -> utility::string_t
+        {
+            protocol::preprocess_response_void(response, result, context);
+            auto new_state = protocol::response_parsers::parse_copy_state(response);
+            *copy_state = new_state;
+            return new_state.copy_id();
+        });
+        return core::executor<utility::string_t>::execute_async(command, modified_options, context);
+    }
+
+    pplx::task<utility::string_t> cloud_page_blob::start_incremental_copy_async(const cloud_page_blob& source, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
+    {
+        web::http::uri raw_source_uri = source.snapshot_qualified_uri().primary_uri();
+        web::http::uri source_uri = source.service_client().credentials().transform_uri(raw_source_uri);
+
+        return start_incremental_copy_async(source_uri, condition, options, context, cancellation_token);
+    }
+
+    pplx::task<void> cloud_page_blob::set_premium_blob_tier_async(const premium_blob_tier tier, const access_condition& condition, const blob_request_options& options, operation_context context, const pplx::cancellation_token& cancellation_token)
+    {
+        blob_request_options modified_options(options);
+        modified_options.apply_defaults(service_client().default_request_options(), type());
+
+        auto command = std::make_shared<core::storage_command<void>>(uri(), cancellation_token, modified_options.is_maximum_execution_time_customized());
+
+        auto properties = m_properties;
+
+        command->set_build_request(std::bind(protocol::set_blob_tier, get_premium_access_tier_string(tier), condition, modified_options, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+        command->set_authentication_handler(service_client().authentication_handler());
+        command->set_preprocess_response([properties, tier](const web::http::http_response& response, const request_result& result, operation_context context) -> void
+        {
+            protocol::preprocess_response_void(response, result, context);
+            properties->m_premium_blob_tier = tier;
+        });
+        return core::executor<void>::execute_async(command, modified_options, context);
+    }
 }} // namespace azure::storage
diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_queue.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_queue.cpp
index f958bcc9..e0f509f2 100644
--- a/Microsoft.WindowsAzure.Storage/src/cloud_queue.cpp
+++ b/Microsoft.WindowsAzure.Storage/src/cloud_queue.cpp
@@ -88,14 +88,9 @@ namespace azure { namespace storage {
 
     pplx::task<void> cloud_queue::add_message_async(cloud_queue_message& message, std::chrono::seconds time_to_live, std::chrono::seconds initial_visibility_timeout, queue_request_options& options, operation_context context)
     {
-        if (time_to_live.count() <= 0LL)
+        if ((time_to_live.count() <= 0LL) && (time_to_live.count() != -1LL))
         {
-            throw std::invalid_argument(protocol::error_non_positive_time_to_live);
-        }
-
-        if (time_to_live.count() > 604800LL)
-        {
-            throw std::invalid_argument(protocol::error_large_time_to_live);
+            throw std::invalid_argument(protocol::error_invalid_value_time_to_live);
         }
 
         if (initial_visibility_timeout.count() < 0LL)
@@ -114,6 +109,21 @@ namespace azure { namespace storage {
         command->set_build_request(std::bind(protocol::add_message, message, time_to_live, initial_visibility_timeout, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
         command->set_authentication_handler(service_client().authentication_handler());
         command->set_preprocess_response(std::bind(protocol::preprocess_response_void, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+        command->set_postprocess_response([&message](const web::http::http_response& response, const request_result&, const core::ostream_descriptor&, operation_context context) -> pplx::task<void>
+        {
+            UNREFERENCED_PARAMETER(context);
+            protocol::message_reader reader(response.body());
+            std::vector<protocol::cloud_message_list_item> queue_items = reader.move_items();
+
+            if (!queue_items.empty())
+            {
+                protocol::cloud_message_list_item& item = queue_items.front();
+                cloud_queue_message message_info(item.move_content(), item.move_id(), item.move_pop_receipt(), item.insertion_time(), item.expiration_time(), item.next_visible_time(), item.dequeue_count());
+                message.update_message_info(message_info);
+            }
+
+            return pplx::task_from_result();
+        });
         return core::executor<void>::execute_async(command, modified_options, context);
     }
diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_queue_client.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_queue_client.cpp
index 9dbbfd3b..27ac0e0e 100644
--- a/Microsoft.WindowsAzure.Storage/src/cloud_queue_client.cpp
+++ b/Microsoft.WindowsAzure.Storage/src/cloud_queue_client.cpp
@@ -140,6 +140,10 @@ namespace azure { namespace storage {
         {
             set_authentication_handler(std::make_shared<protocol::sas_authentication_handler>(std::move(creds)));
         }
+        else if (creds.is_bearer_token())
+        {
+            set_authentication_handler(std::make_shared<protocol::bearer_token_authentication_handler>(std::move(creds)));
+        }
         else
         {
             set_authentication_handler(std::make_shared<protocol::authentication_handler>());
diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_queue_message.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_queue_message.cpp
index 163d87aa..e05f1a00 100644
--- a/Microsoft.WindowsAzure.Storage/src/cloud_queue_message.cpp
+++ b/Microsoft.WindowsAzure.Storage/src/cloud_queue_message.cpp
@@ -20,6 +20,15 @@ namespace azure { namespace storage {
 
-    const std::chrono::seconds max_time_to_live(7 * 24 * 60 * 60);
+    const std::chrono::seconds max_time_to_live(std::chrono::system_clock::duration::max().count());
+
+    void cloud_queue_message::update_message_info(const cloud_queue_message& message_metadata)
+    {
+        m_id = message_metadata.m_id;
+        m_insertion_time = message_metadata.m_insertion_time;
+        m_expiration_time = message_metadata.m_expiration_time;
+        m_pop_receipt = message_metadata.m_pop_receipt;
+        m_next_visible_time = message_metadata.m_next_visible_time;
+    }
 }} // namespace azure::storage
diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_storage_account.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_storage_account.cpp
index 1333f11a..ee6bc52e 100644
--- a/Microsoft.WindowsAzure.Storage/src/cloud_storage_account.cpp
+++ b/Microsoft.WindowsAzure.Storage/src/cloud_storage_account.cpp
@@ -47,6 +47,7 @@ namespace azure { namespace storage {
     const utility::char_t *default_queue_hostname_prefix(_XPLATSTR("queue"));
     const utility::char_t *default_table_hostname_prefix(_XPLATSTR("table"));
     const utility::char_t *default_file_hostname_prefix(_XPLATSTR("file"));
+    const utility::char_t *oauth_access_token_setting_string(_XPLATSTR("OAuthAccessToken"));
 
     storage_uri construct_default_endpoint(const utility::string_t& scheme, const utility::string_t& account_name, const utility::string_t& hostname_prefix, const utility::string_t& endpoint_suffix)
     {
@@ -447,6 +448,14 @@ namespace azure { namespace storage {
             result.append(_XPLATSTR("="));
             result.append((export_secrets ? m_credentials.sas_token() : _XPLATSTR("[key hidden]")));
         }
+
+        if (m_credentials.is_bearer_token())
+        {
+            result.append(_XPLATSTR(";"));
+            result.append(oauth_access_token_setting_string);
+            result.append(_XPLATSTR("="));
+            result.append((export_secrets ? m_credentials.bearer_token() : _XPLATSTR("[key hidden]")));
+        }
     }
 
     return result;
diff --git a/Microsoft.WindowsAzure.Storage/src/cloud_table.cpp b/Microsoft.WindowsAzure.Storage/src/cloud_table.cpp
index ade9e2c8..769c7c46 100644
--- a/Microsoft.WindowsAzure.Storage/src/cloud_table.cpp
+++ b/Microsoft.WindowsAzure.Storage/src/cloud_table.cpp
@@ -192,13 +192,13 @@ namespace azure { namespace storage {
             throw std::invalid_argument(protocol::error_batch_operation_retrieve_mix);
         }
 
-        // TODO: Pre-create a stream for the response to pass to response handler in other functions too so the response doesn't need to be copied
-        Concurrency::streams::stringstreambuf response_buffer;
+        concurrency::streams::container_buffer<std::vector<uint8_t>> response_buffer;
 
         std::shared_ptr<core::storage_command<std::vector<table_result>>> command = std::make_shared<core::storage_command<std::vector<table_result>>>(uri);
-        command->set_build_request(std::bind(protocol::execute_batch_operation, response_buffer, *this, operation, options.payload_format(), is_query, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
+        command->set_build_request(std::bind(protocol::execute_batch_operation, *this, operation, options.payload_format(), is_query, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
         command->set_authentication_handler(service_client().authentication_handler());
         command->set_location_mode(is_query ? core::command_location_mode::primary_or_secondary : core::command_location_mode::primary_only);
+        command->set_destination_stream(response_buffer.create_ostream());
         command->set_preprocess_response(std::bind(protocol::preprocess_response<std::vector<table_result>>, std::vector<table_result>(), std::placeholders::_1, std::placeholders::_2, std::placeholders::_3));
         command->set_postprocess_response([response_buffer, operations, is_query] (const web::http::http_response& response, const request_result&, const core::ostream_descriptor&, operation_context context) mutable -> pplx::task<std::vector<table_result>>
         {
diff --git a/Microsoft.WindowsAzure.Storage/src/constants.cpp b/Microsoft.WindowsAzure.Storage/src/constants.cpp
index a9fe2b44..01ae12fb 100644
--- a/Microsoft.WindowsAzure.Storage/src/constants.cpp
+++ b/Microsoft.WindowsAzure.Storage/src/constants.cpp
@@ -16,12 +16,13 @@
 // -----------------------------------------------------------------------------------------
 
 #include "stdafx.h"
+#include "wascore/constants.h"
 #include "wascore/basic_types.h"
 
 namespace azure { namespace storage { namespace protocol {
 
 #define _CONSTANTS
-#define DAT(a, b) WASTORAGE_API const utility::char_t* a = b;
+#define DAT(a, b) WASTORAGE_API const utility::char_t a[] = b;
 #include "wascore/constants.dat"
 #undef DAT
 #undef _CONSTANTS
diff --git a/Microsoft.WindowsAzure.Storage/src/crc64.cpp b/Microsoft.WindowsAzure.Storage/src/crc64.cpp
new file mode 100644
index 00000000..2755a4f1
--- /dev/null
+++ b/Microsoft.WindowsAzure.Storage/src/crc64.cpp
@@ -0,0 +1,773 @@
+// -----------------------------------------------------------------------------------------
+//
+// Copyright 2019 Microsoft Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. +// +// ----------------------------------------------------------------------------------------- + +#include "stdafx.h" +#include "was/crc64.h" + +namespace azure { namespace storage { + + static constexpr uint64_t poly = 0x9A6C9329AC4BC9B5ULL; + static constexpr uint64_t m_u1[] = + { + 0x0000000000000000ULL, 0x7f6ef0c830358979ULL, 0xfedde190606b12f2ULL, 0x81b31158505e9b8bULL, + 0xc962e5739841b68fULL, 0xb60c15bba8743ff6ULL, 0x37bf04e3f82aa47dULL, 0x48d1f42bc81f2d04ULL, + 0xa61cecb46814fe75ULL, 0xd9721c7c5821770cULL, 0x58c10d24087fec87ULL, 0x27affdec384a65feULL, + 0x6f7e09c7f05548faULL, 0x1010f90fc060c183ULL, 0x91a3e857903e5a08ULL, 0xeecd189fa00bd371ULL, + 0x78e0ff3b88be6f81ULL, 0x078e0ff3b88be6f8ULL, 0x863d1eabe8d57d73ULL, 0xf953ee63d8e0f40aULL, + 0xb1821a4810ffd90eULL, 0xceecea8020ca5077ULL, 0x4f5ffbd87094cbfcULL, 0x30310b1040a14285ULL, + 0xdefc138fe0aa91f4ULL, 0xa192e347d09f188dULL, 0x2021f21f80c18306ULL, 0x5f4f02d7b0f40a7fULL, + 0x179ef6fc78eb277bULL, 0x68f0063448deae02ULL, 0xe943176c18803589ULL, 0x962de7a428b5bcf0ULL, + 0xf1c1fe77117cdf02ULL, 0x8eaf0ebf2149567bULL, 0x0f1c1fe77117cdf0ULL, 0x7072ef2f41224489ULL, + 0x38a31b04893d698dULL, 0x47cdebccb908e0f4ULL, 0xc67efa94e9567b7fULL, 0xb9100a5cd963f206ULL, + 0x57dd12c379682177ULL, 0x28b3e20b495da80eULL, 0xa900f35319033385ULL, 0xd66e039b2936bafcULL, + 0x9ebff7b0e12997f8ULL, 0xe1d10778d11c1e81ULL, 0x606216208142850aULL, 0x1f0ce6e8b1770c73ULL, + 0x8921014c99c2b083ULL, 0xf64ff184a9f739faULL, 0x77fce0dcf9a9a271ULL, 0x08921014c99c2b08ULL, + 0x4043e43f0183060cULL, 0x3f2d14f731b68f75ULL, 
0xbe9e05af61e814feULL, 0xc1f0f56751dd9d87ULL, + 0x2f3dedf8f1d64ef6ULL, 0x50531d30c1e3c78fULL, 0xd1e00c6891bd5c04ULL, 0xae8efca0a188d57dULL, + 0xe65f088b6997f879ULL, 0x9931f84359a27100ULL, 0x1882e91b09fcea8bULL, 0x67ec19d339c963f2ULL, + 0xd75adabd7a6e2d6fULL, 0xa8342a754a5ba416ULL, 0x29873b2d1a053f9dULL, 0x56e9cbe52a30b6e4ULL, + 0x1e383fcee22f9be0ULL, 0x6156cf06d21a1299ULL, 0xe0e5de5e82448912ULL, 0x9f8b2e96b271006bULL, + 0x71463609127ad31aULL, 0x0e28c6c1224f5a63ULL, 0x8f9bd7997211c1e8ULL, 0xf0f5275142244891ULL, + 0xb824d37a8a3b6595ULL, 0xc74a23b2ba0eececULL, 0x46f932eaea507767ULL, 0x3997c222da65fe1eULL, + 0xafba2586f2d042eeULL, 0xd0d4d54ec2e5cb97ULL, 0x5167c41692bb501cULL, 0x2e0934dea28ed965ULL, + 0x66d8c0f56a91f461ULL, 0x19b6303d5aa47d18ULL, 0x980521650afae693ULL, 0xe76bd1ad3acf6feaULL, + 0x09a6c9329ac4bc9bULL, 0x76c839faaaf135e2ULL, 0xf77b28a2faafae69ULL, 0x8815d86aca9a2710ULL, + 0xc0c42c4102850a14ULL, 0xbfaadc8932b0836dULL, 0x3e19cdd162ee18e6ULL, 0x41773d1952db919fULL, + 0x269b24ca6b12f26dULL, 0x59f5d4025b277b14ULL, 0xd846c55a0b79e09fULL, 0xa72835923b4c69e6ULL, + 0xeff9c1b9f35344e2ULL, 0x90973171c366cd9bULL, 0x1124202993385610ULL, 0x6e4ad0e1a30ddf69ULL, + 0x8087c87e03060c18ULL, 0xffe938b633338561ULL, 0x7e5a29ee636d1eeaULL, 0x0134d92653589793ULL, + 0x49e52d0d9b47ba97ULL, 0x368bddc5ab7233eeULL, 0xb738cc9dfb2ca865ULL, 0xc8563c55cb19211cULL, + 0x5e7bdbf1e3ac9decULL, 0x21152b39d3991495ULL, 0xa0a63a6183c78f1eULL, 0xdfc8caa9b3f20667ULL, + 0x97193e827bed2b63ULL, 0xe877ce4a4bd8a21aULL, 0x69c4df121b863991ULL, 0x16aa2fda2bb3b0e8ULL, + 0xf86737458bb86399ULL, 0x8709c78dbb8deae0ULL, 0x06bad6d5ebd3716bULL, 0x79d4261ddbe6f812ULL, + 0x3105d23613f9d516ULL, 0x4e6b22fe23cc5c6fULL, 0xcfd833a67392c7e4ULL, 0xb0b6c36e43a74e9dULL, + 0x9a6c9329ac4bc9b5ULL, 0xe50263e19c7e40ccULL, 0x64b172b9cc20db47ULL, 0x1bdf8271fc15523eULL, + 0x530e765a340a7f3aULL, 0x2c608692043ff643ULL, 0xadd397ca54616dc8ULL, 0xd2bd67026454e4b1ULL, + 0x3c707f9dc45f37c0ULL, 0x431e8f55f46abeb9ULL, 0xc2ad9e0da4342532ULL, 
0xbdc36ec59401ac4bULL, + 0xf5129aee5c1e814fULL, 0x8a7c6a266c2b0836ULL, 0x0bcf7b7e3c7593bdULL, 0x74a18bb60c401ac4ULL, + 0xe28c6c1224f5a634ULL, 0x9de29cda14c02f4dULL, 0x1c518d82449eb4c6ULL, 0x633f7d4a74ab3dbfULL, + 0x2bee8961bcb410bbULL, 0x548079a98c8199c2ULL, 0xd53368f1dcdf0249ULL, 0xaa5d9839ecea8b30ULL, + 0x449080a64ce15841ULL, 0x3bfe706e7cd4d138ULL, 0xba4d61362c8a4ab3ULL, 0xc52391fe1cbfc3caULL, + 0x8df265d5d4a0eeceULL, 0xf29c951de49567b7ULL, 0x732f8445b4cbfc3cULL, 0x0c41748d84fe7545ULL, + 0x6bad6d5ebd3716b7ULL, 0x14c39d968d029fceULL, 0x95708ccedd5c0445ULL, 0xea1e7c06ed698d3cULL, + 0xa2cf882d2576a038ULL, 0xdda178e515432941ULL, 0x5c1269bd451db2caULL, 0x237c997575283bb3ULL, + 0xcdb181ead523e8c2ULL, 0xb2df7122e51661bbULL, 0x336c607ab548fa30ULL, 0x4c0290b2857d7349ULL, + 0x04d364994d625e4dULL, 0x7bbd94517d57d734ULL, 0xfa0e85092d094cbfULL, 0x856075c11d3cc5c6ULL, + 0x134d926535897936ULL, 0x6c2362ad05bcf04fULL, 0xed9073f555e26bc4ULL, 0x92fe833d65d7e2bdULL, + 0xda2f7716adc8cfb9ULL, 0xa54187de9dfd46c0ULL, 0x24f29686cda3dd4bULL, 0x5b9c664efd965432ULL, + 0xb5517ed15d9d8743ULL, 0xca3f8e196da80e3aULL, 0x4b8c9f413df695b1ULL, 0x34e26f890dc31cc8ULL, + 0x7c339ba2c5dc31ccULL, 0x035d6b6af5e9b8b5ULL, 0x82ee7a32a5b7233eULL, 0xfd808afa9582aa47ULL, + 0x4d364994d625e4daULL, 0x3258b95ce6106da3ULL, 0xb3eba804b64ef628ULL, 0xcc8558cc867b7f51ULL, + 0x8454ace74e645255ULL, 0xfb3a5c2f7e51db2cULL, 0x7a894d772e0f40a7ULL, 0x05e7bdbf1e3ac9deULL, + 0xeb2aa520be311aafULL, 0x944455e88e0493d6ULL, 0x15f744b0de5a085dULL, 0x6a99b478ee6f8124ULL, + 0x224840532670ac20ULL, 0x5d26b09b16452559ULL, 0xdc95a1c3461bbed2ULL, 0xa3fb510b762e37abULL, + 0x35d6b6af5e9b8b5bULL, 0x4ab846676eae0222ULL, 0xcb0b573f3ef099a9ULL, 0xb465a7f70ec510d0ULL, + 0xfcb453dcc6da3dd4ULL, 0x83daa314f6efb4adULL, 0x0269b24ca6b12f26ULL, 0x7d0742849684a65fULL, + 0x93ca5a1b368f752eULL, 0xeca4aad306bafc57ULL, 0x6d17bb8b56e467dcULL, 0x12794b4366d1eea5ULL, + 0x5aa8bf68aecec3a1ULL, 0x25c64fa09efb4ad8ULL, 0xa4755ef8cea5d153ULL, 0xdb1bae30fe90582aULL, + 
0xbcf7b7e3c7593bd8ULL, 0xc399472bf76cb2a1ULL, 0x422a5673a732292aULL, 0x3d44a6bb9707a053ULL, + 0x759552905f188d57ULL, 0x0afba2586f2d042eULL, 0x8b48b3003f739fa5ULL, 0xf42643c80f4616dcULL, + 0x1aeb5b57af4dc5adULL, 0x6585ab9f9f784cd4ULL, 0xe436bac7cf26d75fULL, 0x9b584a0fff135e26ULL, + 0xd389be24370c7322ULL, 0xace74eec0739fa5bULL, 0x2d545fb4576761d0ULL, 0x523aaf7c6752e8a9ULL, + 0xc41748d84fe75459ULL, 0xbb79b8107fd2dd20ULL, 0x3acaa9482f8c46abULL, 0x45a459801fb9cfd2ULL, + 0x0d75adabd7a6e2d6ULL, 0x721b5d63e7936bafULL, 0xf3a84c3bb7cdf024ULL, 0x8cc6bcf387f8795dULL, + 0x620ba46c27f3aa2cULL, 0x1d6554a417c62355ULL, 0x9cd645fc4798b8deULL, 0xe3b8b53477ad31a7ULL, + 0xab69411fbfb21ca3ULL, 0xd407b1d78f8795daULL, 0x55b4a08fdfd90e51ULL, 0x2ada5047efec8728ULL, + }; + + static constexpr uint64_t m_u32[] = + { + 0x0000000000000000ULL, 0xb8c533c1177eb231ULL, 0x455341d1766af709ULL, 0xfd96721061144538ULL, + 0x8aa683a2ecd5ee12ULL, 0x3263b063fbab5c23ULL, 0xcff5c2739abf191bULL, 0x7730f1b28dc1ab2aULL, + 0x21942116813c4f4fULL, 0x995112d79642fd7eULL, 0x64c760c7f756b846ULL, 0xdc025306e0280a77ULL, + 0xab32a2b46de9a15dULL, 0x13f791757a97136cULL, 0xee61e3651b835654ULL, 0x56a4d0a40cfde465ULL, + 0x4328422d02789e9eULL, 0xfbed71ec15062cafULL, 0x067b03fc74126997ULL, 0xbebe303d636cdba6ULL, + 0xc98ec18feead708cULL, 0x714bf24ef9d3c2bdULL, 0x8cdd805e98c78785ULL, 0x3418b39f8fb935b4ULL, + 0x62bc633b8344d1d1ULL, 0xda7950fa943a63e0ULL, 0x27ef22eaf52e26d8ULL, 0x9f2a112be25094e9ULL, + 0xe81ae0996f913fc3ULL, 0x50dfd35878ef8df2ULL, 0xad49a14819fbc8caULL, 0x158c92890e857afbULL, + 0x8650845a04f13d3cULL, 0x3e95b79b138f8f0dULL, 0xc303c58b729bca35ULL, 0x7bc6f64a65e57804ULL, + 0x0cf607f8e824d32eULL, 0xb4333439ff5a611fULL, 0x49a546299e4e2427ULL, 0xf16075e889309616ULL, + 0xa7c4a54c85cd7273ULL, 0x1f01968d92b3c042ULL, 0xe297e49df3a7857aULL, 0x5a52d75ce4d9374bULL, + 0x2d6226ee69189c61ULL, 0x95a7152f7e662e50ULL, 0x6831673f1f726b68ULL, 0xd0f454fe080cd959ULL, + 0xc578c6770689a3a2ULL, 0x7dbdf5b611f71193ULL, 0x802b87a670e354abULL, 
0x38eeb467679de69aULL, + 0x4fde45d5ea5c4db0ULL, 0xf71b7614fd22ff81ULL, 0x0a8d04049c36bab9ULL, 0xb24837c58b480888ULL, + 0xe4ece76187b5ecedULL, 0x5c29d4a090cb5edcULL, 0xa1bfa6b0f1df1be4ULL, 0x197a9571e6a1a9d5ULL, + 0x6e4a64c36b6002ffULL, 0xd68f57027c1eb0ceULL, 0x2b1925121d0af5f6ULL, 0x93dc16d30a7447c7ULL, + 0x38782ee75175e913ULL, 0x80bd1d26460b5b22ULL, 0x7d2b6f36271f1e1aULL, 0xc5ee5cf73061ac2bULL, + 0xb2dead45bda00701ULL, 0x0a1b9e84aadeb530ULL, 0xf78dec94cbcaf008ULL, 0x4f48df55dcb44239ULL, + 0x19ec0ff1d049a65cULL, 0xa1293c30c737146dULL, 0x5cbf4e20a6235155ULL, 0xe47a7de1b15de364ULL, + 0x934a8c533c9c484eULL, 0x2b8fbf922be2fa7fULL, 0xd619cd824af6bf47ULL, 0x6edcfe435d880d76ULL, + 0x7b506cca530d778dULL, 0xc3955f0b4473c5bcULL, 0x3e032d1b25678084ULL, 0x86c61eda321932b5ULL, + 0xf1f6ef68bfd8999fULL, 0x4933dca9a8a62baeULL, 0xb4a5aeb9c9b26e96ULL, 0x0c609d78deccdca7ULL, + 0x5ac44ddcd23138c2ULL, 0xe2017e1dc54f8af3ULL, 0x1f970c0da45bcfcbULL, 0xa7523fccb3257dfaULL, + 0xd062ce7e3ee4d6d0ULL, 0x68a7fdbf299a64e1ULL, 0x95318faf488e21d9ULL, 0x2df4bc6e5ff093e8ULL, + 0xbe28aabd5584d42fULL, 0x06ed997c42fa661eULL, 0xfb7beb6c23ee2326ULL, 0x43bed8ad34909117ULL, + 0x348e291fb9513a3dULL, 0x8c4b1adeae2f880cULL, 0x71dd68cecf3bcd34ULL, 0xc9185b0fd8457f05ULL, + 0x9fbc8babd4b89b60ULL, 0x2779b86ac3c62951ULL, 0xdaefca7aa2d26c69ULL, 0x622af9bbb5acde58ULL, + 0x151a0809386d7572ULL, 0xaddf3bc82f13c743ULL, 0x504949d84e07827bULL, 0xe88c7a195979304aULL, + 0xfd00e89057fc4ab1ULL, 0x45c5db514082f880ULL, 0xb853a9412196bdb8ULL, 0x00969a8036e80f89ULL, + 0x77a66b32bb29a4a3ULL, 0xcf6358f3ac571692ULL, 0x32f52ae3cd4353aaULL, 0x8a301922da3de19bULL, + 0xdc94c986d6c005feULL, 0x6451fa47c1beb7cfULL, 0x99c78857a0aaf2f7ULL, 0x2102bb96b7d440c6ULL, + 0x56324a243a15ebecULL, 0xeef779e52d6b59ddULL, 0x13610bf54c7f1ce5ULL, 0xaba438345b01aed4ULL, + 0x70f05dcea2ebd226ULL, 0xc8356e0fb5956017ULL, 0x35a31c1fd481252fULL, 0x8d662fdec3ff971eULL, + 0xfa56de6c4e3e3c34ULL, 0x4293edad59408e05ULL, 0xbf059fbd3854cb3dULL, 0x07c0ac7c2f2a790cULL, + 
0x51647cd823d79d69ULL, 0xe9a14f1934a92f58ULL, 0x14373d0955bd6a60ULL, 0xacf20ec842c3d851ULL, + 0xdbc2ff7acf02737bULL, 0x6307ccbbd87cc14aULL, 0x9e91beabb9688472ULL, 0x26548d6aae163643ULL, + 0x33d81fe3a0934cb8ULL, 0x8b1d2c22b7edfe89ULL, 0x768b5e32d6f9bbb1ULL, 0xce4e6df3c1870980ULL, + 0xb97e9c414c46a2aaULL, 0x01bbaf805b38109bULL, 0xfc2ddd903a2c55a3ULL, 0x44e8ee512d52e792ULL, + 0x124c3ef521af03f7ULL, 0xaa890d3436d1b1c6ULL, 0x571f7f2457c5f4feULL, 0xefda4ce540bb46cfULL, + 0x98eabd57cd7aede5ULL, 0x202f8e96da045fd4ULL, 0xddb9fc86bb101aecULL, 0x657ccf47ac6ea8ddULL, + 0xf6a0d994a61aef1aULL, 0x4e65ea55b1645d2bULL, 0xb3f39845d0701813ULL, 0x0b36ab84c70eaa22ULL, + 0x7c065a364acf0108ULL, 0xc4c369f75db1b339ULL, 0x39551be73ca5f601ULL, 0x819028262bdb4430ULL, + 0xd734f8822726a055ULL, 0x6ff1cb4330581264ULL, 0x9267b953514c575cULL, 0x2aa28a924632e56dULL, + 0x5d927b20cbf34e47ULL, 0xe55748e1dc8dfc76ULL, 0x18c13af1bd99b94eULL, 0xa0040930aae70b7fULL, + 0xb5889bb9a4627184ULL, 0x0d4da878b31cc3b5ULL, 0xf0dbda68d208868dULL, 0x481ee9a9c57634bcULL, + 0x3f2e181b48b79f96ULL, 0x87eb2bda5fc92da7ULL, 0x7a7d59ca3edd689fULL, 0xc2b86a0b29a3daaeULL, + 0x941cbaaf255e3ecbULL, 0x2cd9896e32208cfaULL, 0xd14ffb7e5334c9c2ULL, 0x698ac8bf444a7bf3ULL, + 0x1eba390dc98bd0d9ULL, 0xa67f0accdef562e8ULL, 0x5be978dcbfe127d0ULL, 0xe32c4b1da89f95e1ULL, + 0x48887329f39e3b35ULL, 0xf04d40e8e4e08904ULL, 0x0ddb32f885f4cc3cULL, 0xb51e0139928a7e0dULL, + 0xc22ef08b1f4bd527ULL, 0x7aebc34a08356716ULL, 0x877db15a6921222eULL, 0x3fb8829b7e5f901fULL, + 0x691c523f72a2747aULL, 0xd1d961fe65dcc64bULL, 0x2c4f13ee04c88373ULL, 0x948a202f13b63142ULL, + 0xe3bad19d9e779a68ULL, 0x5b7fe25c89092859ULL, 0xa6e9904ce81d6d61ULL, 0x1e2ca38dff63df50ULL, + 0x0ba03104f1e6a5abULL, 0xb36502c5e698179aULL, 0x4ef370d5878c52a2ULL, 0xf636431490f2e093ULL, + 0x8106b2a61d334bb9ULL, 0x39c381670a4df988ULL, 0xc455f3776b59bcb0ULL, 0x7c90c0b67c270e81ULL, + 0x2a34101270daeae4ULL, 0x92f123d367a458d5ULL, 0x6f6751c306b01dedULL, 0xd7a2620211ceafdcULL, + 0xa09293b09c0f04f6ULL, 
0x1857a0718b71b6c7ULL, 0xe5c1d261ea65f3ffULL, 0x5d04e1a0fd1b41ceULL, + 0xced8f773f76f0609ULL, 0x761dc4b2e011b438ULL, 0x8b8bb6a28105f100ULL, 0x334e8563967b4331ULL, + 0x447e74d11bbae81bULL, 0xfcbb47100cc45a2aULL, 0x012d35006dd01f12ULL, 0xb9e806c17aaead23ULL, + 0xef4cd66576534946ULL, 0x5789e5a4612dfb77ULL, 0xaa1f97b40039be4fULL, 0x12daa47517470c7eULL, + 0x65ea55c79a86a754ULL, 0xdd2f66068df81565ULL, 0x20b91416ecec505dULL, 0x987c27d7fb92e26cULL, + 0x8df0b55ef5179897ULL, 0x3535869fe2692aa6ULL, 0xc8a3f48f837d6f9eULL, 0x7066c74e9403ddafULL, + 0x075636fc19c27685ULL, 0xbf93053d0ebcc4b4ULL, 0x4205772d6fa8818cULL, 0xfac044ec78d633bdULL, + 0xac649448742bd7d8ULL, 0x14a1a789635565e9ULL, 0xe937d599024120d1ULL, 0x51f2e658153f92e0ULL, + 0x26c217ea98fe39caULL, 0x9e07242b8f808bfbULL, 0x6391563bee94cec3ULL, 0xdb5465faf9ea7cf2ULL, + + 0x0000000000000000ULL, 0xf6f734b768e04748ULL, 0xd9374f3d89571dfbULL, 0x2fc07b8ae1b75ab3ULL, + 0x86b7b8284a39a89dULL, 0x70408c9f22d9efd5ULL, 0x5f80f715c36eb566ULL, 0xa977c3a2ab8ef22eULL, + 0x39b65603cce4c251ULL, 0xcf4162b4a4048519ULL, 0xe081193e45b3dfaaULL, 0x16762d892d5398e2ULL, + 0xbf01ee2b86dd6accULL, 0x49f6da9cee3d2d84ULL, 0x6636a1160f8a7737ULL, 0x90c195a1676a307fULL, + 0x736cac0799c984a2ULL, 0x859b98b0f129c3eaULL, 0xaa5be33a109e9959ULL, 0x5cacd78d787ede11ULL, + 0xf5db142fd3f02c3fULL, 0x032c2098bb106b77ULL, 0x2cec5b125aa731c4ULL, 0xda1b6fa53247768cULL, + 0x4adafa04552d46f3ULL, 0xbc2dceb33dcd01bbULL, 0x93edb539dc7a5b08ULL, 0x651a818eb49a1c40ULL, + 0xcc6d422c1f14ee6eULL, 0x3a9a769b77f4a926ULL, 0x155a0d119643f395ULL, 0xe3ad39a6fea3b4ddULL, + 0xe6d9580f33930944ULL, 0x102e6cb85b734e0cULL, 0x3fee1732bac414bfULL, 0xc9192385d22453f7ULL, + 0x606ee02779aaa1d9ULL, 0x9699d490114ae691ULL, 0xb959af1af0fdbc22ULL, 0x4fae9bad981dfb6aULL, + 0xdf6f0e0cff77cb15ULL, 0x29983abb97978c5dULL, 0x065841317620d6eeULL, 0xf0af75861ec091a6ULL, + 0x59d8b624b54e6388ULL, 0xaf2f8293ddae24c0ULL, 0x80eff9193c197e73ULL, 0x7618cdae54f9393bULL, + 0x95b5f408aa5a8de6ULL, 0x6342c0bfc2bacaaeULL, 
0x4c82bb35230d901dULL, 0xba758f824bedd755ULL, + 0x13024c20e063257bULL, 0xe5f5789788836233ULL, 0xca35031d69343880ULL, 0x3cc237aa01d47fc8ULL, + 0xac03a20b66be4fb7ULL, 0x5af496bc0e5e08ffULL, 0x7534ed36efe9524cULL, 0x83c3d98187091504ULL, + 0x2ab41a232c87e72aULL, 0xdc432e944467a062ULL, 0xf383551ea5d0fad1ULL, 0x057461a9cd30bd99ULL, + 0xf96b964d3fb181e3ULL, 0x0f9ca2fa5751c6abULL, 0x205cd970b6e69c18ULL, 0xd6abedc7de06db50ULL, + 0x7fdc2e657588297eULL, 0x892b1ad21d686e36ULL, 0xa6eb6158fcdf3485ULL, 0x501c55ef943f73cdULL, + 0xc0ddc04ef35543b2ULL, 0x362af4f99bb504faULL, 0x19ea8f737a025e49ULL, 0xef1dbbc412e21901ULL, + 0x466a7866b96ceb2fULL, 0xb09d4cd1d18cac67ULL, 0x9f5d375b303bf6d4ULL, 0x69aa03ec58dbb19cULL, + 0x8a073a4aa6780541ULL, 0x7cf00efdce984209ULL, 0x533075772f2f18baULL, 0xa5c741c047cf5ff2ULL, + 0x0cb08262ec41addcULL, 0xfa47b6d584a1ea94ULL, 0xd587cd5f6516b027ULL, 0x2370f9e80df6f76fULL, + 0xb3b16c496a9cc710ULL, 0x454658fe027c8058ULL, 0x6a862374e3cbdaebULL, 0x9c7117c38b2b9da3ULL, + 0x3506d46120a56f8dULL, 0xc3f1e0d6484528c5ULL, 0xec319b5ca9f27276ULL, 0x1ac6afebc112353eULL, + 0x1fb2ce420c2288a7ULL, 0xe945faf564c2cfefULL, 0xc685817f8575955cULL, 0x3072b5c8ed95d214ULL, + 0x9905766a461b203aULL, 0x6ff242dd2efb6772ULL, 0x40323957cf4c3dc1ULL, 0xb6c50de0a7ac7a89ULL, + 0x26049841c0c64af6ULL, 0xd0f3acf6a8260dbeULL, 0xff33d77c4991570dULL, 0x09c4e3cb21711045ULL, + 0xa0b320698affe26bULL, 0x564414dee21fa523ULL, 0x79846f5403a8ff90ULL, 0x8f735be36b48b8d8ULL, + 0x6cde624595eb0c05ULL, 0x9a2956f2fd0b4b4dULL, 0xb5e92d781cbc11feULL, 0x431e19cf745c56b6ULL, + 0xea69da6ddfd2a498ULL, 0x1c9eeedab732e3d0ULL, 0x335e95505685b963ULL, 0xc5a9a1e73e65fe2bULL, + 0x55683446590fce54ULL, 0xa39f00f131ef891cULL, 0x8c5f7b7bd058d3afULL, 0x7aa84fccb8b894e7ULL, + 0xd3df8c6e133666c9ULL, 0x2528b8d97bd62181ULL, 0x0ae8c3539a617b32ULL, 0xfc1ff7e4f2813c7aULL, + 0xc60e0ac927f490adULL, 0x30f93e7e4f14d7e5ULL, 0x1f3945f4aea38d56ULL, 0xe9ce7143c643ca1eULL, + 0x40b9b2e16dcd3830ULL, 0xb64e8656052d7f78ULL, 0x998efddce49a25cbULL, 
0x6f79c96b8c7a6283ULL, + 0xffb85ccaeb1052fcULL, 0x094f687d83f015b4ULL, 0x268f13f762474f07ULL, 0xd07827400aa7084fULL, + 0x790fe4e2a129fa61ULL, 0x8ff8d055c9c9bd29ULL, 0xa038abdf287ee79aULL, 0x56cf9f68409ea0d2ULL, + 0xb562a6cebe3d140fULL, 0x43959279d6dd5347ULL, 0x6c55e9f3376a09f4ULL, 0x9aa2dd445f8a4ebcULL, + 0x33d51ee6f404bc92ULL, 0xc5222a519ce4fbdaULL, 0xeae251db7d53a169ULL, 0x1c15656c15b3e621ULL, + 0x8cd4f0cd72d9d65eULL, 0x7a23c47a1a399116ULL, 0x55e3bff0fb8ecba5ULL, 0xa3148b47936e8cedULL, + 0x0a6348e538e07ec3ULL, 0xfc947c525000398bULL, 0xd35407d8b1b76338ULL, 0x25a3336fd9572470ULL, + 0x20d752c6146799e9ULL, 0xd62066717c87dea1ULL, 0xf9e01dfb9d308412ULL, 0x0f17294cf5d0c35aULL, + 0xa660eaee5e5e3174ULL, 0x5097de5936be763cULL, 0x7f57a5d3d7092c8fULL, 0x89a09164bfe96bc7ULL, + 0x196104c5d8835bb8ULL, 0xef963072b0631cf0ULL, 0xc0564bf851d44643ULL, 0x36a17f4f3934010bULL, + 0x9fd6bced92baf325ULL, 0x6921885afa5ab46dULL, 0x46e1f3d01bedeedeULL, 0xb016c767730da996ULL, + 0x53bbfec18dae1d4bULL, 0xa54cca76e54e5a03ULL, 0x8a8cb1fc04f900b0ULL, 0x7c7b854b6c1947f8ULL, + 0xd50c46e9c797b5d6ULL, 0x23fb725eaf77f29eULL, 0x0c3b09d44ec0a82dULL, 0xfacc3d632620ef65ULL, + 0x6a0da8c2414adf1aULL, 0x9cfa9c7529aa9852ULL, 0xb33ae7ffc81dc2e1ULL, 0x45cdd348a0fd85a9ULL, + 0xecba10ea0b737787ULL, 0x1a4d245d639330cfULL, 0x358d5fd782246a7cULL, 0xc37a6b60eac42d34ULL, + 0x3f659c841845114eULL, 0xc992a83370a55606ULL, 0xe652d3b991120cb5ULL, 0x10a5e70ef9f24bfdULL, + 0xb9d224ac527cb9d3ULL, 0x4f25101b3a9cfe9bULL, 0x60e56b91db2ba428ULL, 0x96125f26b3cbe360ULL, + 0x06d3ca87d4a1d31fULL, 0xf024fe30bc419457ULL, 0xdfe485ba5df6cee4ULL, 0x2913b10d351689acULL, + 0x806472af9e987b82ULL, 0x76934618f6783ccaULL, 0x59533d9217cf6679ULL, 0xafa409257f2f2131ULL, + 0x4c093083818c95ecULL, 0xbafe0434e96cd2a4ULL, 0x953e7fbe08db8817ULL, 0x63c94b09603bcf5fULL, + 0xcabe88abcbb53d71ULL, 0x3c49bc1ca3557a39ULL, 0x1389c79642e2208aULL, 0xe57ef3212a0267c2ULL, + 0x75bf66804d6857bdULL, 0x83485237258810f5ULL, 0xac8829bdc43f4a46ULL, 0x5a7f1d0aacdf0d0eULL, + 
0xf308dea80751ff20ULL, 0x05ffea1f6fb1b868ULL, 0x2a3f91958e06e2dbULL, 0xdcc8a522e6e6a593ULL, + 0xd9bcc48b2bd6180aULL, 0x2f4bf03c43365f42ULL, 0x008b8bb6a28105f1ULL, 0xf67cbf01ca6142b9ULL, + 0x5f0b7ca361efb097ULL, 0xa9fc4814090ff7dfULL, 0x863c339ee8b8ad6cULL, 0x70cb07298058ea24ULL, + 0xe00a9288e732da5bULL, 0x16fda63f8fd29d13ULL, 0x393dddb56e65c7a0ULL, 0xcfcae902068580e8ULL, + 0x66bd2aa0ad0b72c6ULL, 0x904a1e17c5eb358eULL, 0xbf8a659d245c6f3dULL, 0x497d512a4cbc2875ULL, + 0xaad0688cb21f9ca8ULL, 0x5c275c3bdaffdbe0ULL, 0x73e727b13b488153ULL, 0x8510130653a8c61bULL, + 0x2c67d0a4f8263435ULL, 0xda90e41390c6737dULL, 0xf5509f99717129ceULL, 0x03a7ab2e19916e86ULL, + 0x93663e8f7efb5ef9ULL, 0x65910a38161b19b1ULL, 0x4a5171b2f7ac4302ULL, 0xbca645059f4c044aULL, + 0x15d186a734c2f664ULL, 0xe326b2105c22b12cULL, 0xcce6c99abd95eb9fULL, 0x3a11fd2dd575acd7ULL, + + 0x0000000000000000ULL, 0x71b0c13da512335dULL, 0xe361827b4a2466baULL, 0x92d14346ef3655e7ULL, + 0xf21a22a5ccdf5e1fULL, 0x83aae39869cd6d42ULL, 0x117ba0de86fb38a5ULL, 0x60cb61e323e90bf8ULL, + 0xd0ed6318c1292f55ULL, 0xa15da225643b1c08ULL, 0x338ce1638b0d49efULL, 0x423c205e2e1f7ab2ULL, + 0x22f741bd0df6714aULL, 0x53478080a8e44217ULL, 0xc196c3c647d217f0ULL, 0xb02602fbe2c024adULL, + 0x9503e062dac5cdc1ULL, 0xe4b3215f7fd7fe9cULL, 0x7662621990e1ab7bULL, 0x07d2a32435f39826ULL, + 0x6719c2c7161a93deULL, 0x16a903fab308a083ULL, 0x847840bc5c3ef564ULL, 0xf5c88181f92cc639ULL, + 0x45ee837a1bece294ULL, 0x345e4247befed1c9ULL, 0xa68f010151c8842eULL, 0xd73fc03cf4dab773ULL, + 0xb7f4a1dfd733bc8bULL, 0xc64460e272218fd6ULL, 0x549523a49d17da31ULL, 0x2525e2993805e96cULL, + 0x1edee696ed1c08e9ULL, 0x6f6e27ab480e3bb4ULL, 0xfdbf64eda7386e53ULL, 0x8c0fa5d0022a5d0eULL, + 0xecc4c43321c356f6ULL, 0x9d74050e84d165abULL, 0x0fa546486be7304cULL, 0x7e158775cef50311ULL, + 0xce33858e2c3527bcULL, 0xbf8344b3892714e1ULL, 0x2d5207f566114106ULL, 0x5ce2c6c8c303725bULL, + 0x3c29a72be0ea79a3ULL, 0x4d99661645f84afeULL, 0xdf482550aace1f19ULL, 0xaef8e46d0fdc2c44ULL, + 0x8bdd06f437d9c528ULL, 
0xfa6dc7c992cbf675ULL, 0x68bc848f7dfda392ULL, 0x190c45b2d8ef90cfULL, + 0x79c72451fb069b37ULL, 0x0877e56c5e14a86aULL, 0x9aa6a62ab122fd8dULL, 0xeb1667171430ced0ULL, + 0x5b3065ecf6f0ea7dULL, 0x2a80a4d153e2d920ULL, 0xb851e797bcd48cc7ULL, 0xc9e126aa19c6bf9aULL, + 0xa92a47493a2fb462ULL, 0xd89a86749f3d873fULL, 0x4a4bc532700bd2d8ULL, 0x3bfb040fd519e185ULL, + 0x3dbdcd2dda3811d2ULL, 0x4c0d0c107f2a228fULL, 0xdedc4f56901c7768ULL, 0xaf6c8e6b350e4435ULL, + 0xcfa7ef8816e74fcdULL, 0xbe172eb5b3f57c90ULL, 0x2cc66df35cc32977ULL, 0x5d76accef9d11a2aULL, + 0xed50ae351b113e87ULL, 0x9ce06f08be030ddaULL, 0x0e312c4e5135583dULL, 0x7f81ed73f4276b60ULL, + 0x1f4a8c90d7ce6098ULL, 0x6efa4dad72dc53c5ULL, 0xfc2b0eeb9dea0622ULL, 0x8d9bcfd638f8357fULL, + 0xa8be2d4f00fddc13ULL, 0xd90eec72a5efef4eULL, 0x4bdfaf344ad9baa9ULL, 0x3a6f6e09efcb89f4ULL, + 0x5aa40feacc22820cULL, 0x2b14ced76930b151ULL, 0xb9c58d918606e4b6ULL, 0xc8754cac2314d7ebULL, + 0x78534e57c1d4f346ULL, 0x09e38f6a64c6c01bULL, 0x9b32cc2c8bf095fcULL, 0xea820d112ee2a6a1ULL, + 0x8a496cf20d0bad59ULL, 0xfbf9adcfa8199e04ULL, 0x6928ee89472fcbe3ULL, 0x18982fb4e23df8beULL, + 0x23632bbb3724193bULL, 0x52d3ea8692362a66ULL, 0xc002a9c07d007f81ULL, 0xb1b268fdd8124cdcULL, + 0xd179091efbfb4724ULL, 0xa0c9c8235ee97479ULL, 0x32188b65b1df219eULL, 0x43a84a5814cd12c3ULL, + 0xf38e48a3f60d366eULL, 0x823e899e531f0533ULL, 0x10efcad8bc2950d4ULL, 0x615f0be5193b6389ULL, + 0x01946a063ad26871ULL, 0x7024ab3b9fc05b2cULL, 0xe2f5e87d70f60ecbULL, 0x93452940d5e43d96ULL, + 0xb660cbd9ede1d4faULL, 0xc7d00ae448f3e7a7ULL, 0x550149a2a7c5b240ULL, 0x24b1889f02d7811dULL, + 0x447ae97c213e8ae5ULL, 0x35ca2841842cb9b8ULL, 0xa71b6b076b1aec5fULL, 0xd6abaa3ace08df02ULL, + 0x668da8c12cc8fbafULL, 0x173d69fc89dac8f2ULL, 0x85ec2aba66ec9d15ULL, 0xf45ceb87c3feae48ULL, + 0x94978a64e017a5b0ULL, 0xe5274b59450596edULL, 0x77f6081faa33c30aULL, 0x0646c9220f21f057ULL, + 0x7b7b9a5bb47023a4ULL, 0x0acb5b66116210f9ULL, 0x981a1820fe54451eULL, 0xe9aad91d5b467643ULL, + 0x8961b8fe78af7dbbULL, 0xf8d179c3ddbd4ee6ULL, 
0x6a003a85328b1b01ULL, 0x1bb0fbb89799285cULL, + 0xab96f94375590cf1ULL, 0xda26387ed04b3facULL, 0x48f77b383f7d6a4bULL, 0x3947ba059a6f5916ULL, + 0x598cdbe6b98652eeULL, 0x283c1adb1c9461b3ULL, 0xbaed599df3a23454ULL, 0xcb5d98a056b00709ULL, + 0xee787a396eb5ee65ULL, 0x9fc8bb04cba7dd38ULL, 0x0d19f842249188dfULL, 0x7ca9397f8183bb82ULL, + 0x1c62589ca26ab07aULL, 0x6dd299a107788327ULL, 0xff03dae7e84ed6c0ULL, 0x8eb31bda4d5ce59dULL, + 0x3e951921af9cc130ULL, 0x4f25d81c0a8ef26dULL, 0xddf49b5ae5b8a78aULL, 0xac445a6740aa94d7ULL, + 0xcc8f3b8463439f2fULL, 0xbd3ffab9c651ac72ULL, 0x2feeb9ff2967f995ULL, 0x5e5e78c28c75cac8ULL, + 0x65a57ccd596c2b4dULL, 0x1415bdf0fc7e1810ULL, 0x86c4feb613484df7ULL, 0xf7743f8bb65a7eaaULL, + 0x97bf5e6895b37552ULL, 0xe60f9f5530a1460fULL, 0x74dedc13df9713e8ULL, 0x056e1d2e7a8520b5ULL, + 0xb5481fd598450418ULL, 0xc4f8dee83d573745ULL, 0x56299daed26162a2ULL, 0x27995c93777351ffULL, + 0x47523d70549a5a07ULL, 0x36e2fc4df188695aULL, 0xa433bf0b1ebe3cbdULL, 0xd5837e36bbac0fe0ULL, + 0xf0a69caf83a9e68cULL, 0x81165d9226bbd5d1ULL, 0x13c71ed4c98d8036ULL, 0x6277dfe96c9fb36bULL, + 0x02bcbe0a4f76b893ULL, 0x730c7f37ea648bceULL, 0xe1dd3c710552de29ULL, 0x906dfd4ca040ed74ULL, + 0x204bffb74280c9d9ULL, 0x51fb3e8ae792fa84ULL, 0xc32a7dcc08a4af63ULL, 0xb29abcf1adb69c3eULL, + 0xd251dd128e5f97c6ULL, 0xa3e11c2f2b4da49bULL, 0x31305f69c47bf17cULL, 0x40809e546169c221ULL, + 0x46c657766e483276ULL, 0x3776964bcb5a012bULL, 0xa5a7d50d246c54ccULL, 0xd4171430817e6791ULL, + 0xb4dc75d3a2976c69ULL, 0xc56cb4ee07855f34ULL, 0x57bdf7a8e8b30ad3ULL, 0x260d36954da1398eULL, + 0x962b346eaf611d23ULL, 0xe79bf5530a732e7eULL, 0x754ab615e5457b99ULL, 0x04fa7728405748c4ULL, + 0x643116cb63be433cULL, 0x1581d7f6c6ac7061ULL, 0x875094b0299a2586ULL, 0xf6e0558d8c8816dbULL, + 0xd3c5b714b48dffb7ULL, 0xa2757629119fcceaULL, 0x30a4356ffea9990dULL, 0x4114f4525bbbaa50ULL, + 0x21df95b17852a1a8ULL, 0x506f548cdd4092f5ULL, 0xc2be17ca3276c712ULL, 0xb30ed6f79764f44fULL, + 0x0328d40c75a4d0e2ULL, 0x72981531d0b6e3bfULL, 0xe04956773f80b658ULL, 
0x91f9974a9a928505ULL, + 0xf132f6a9b97b8efdULL, 0x808237941c69bda0ULL, 0x125374d2f35fe847ULL, 0x63e3b5ef564ddb1aULL, + 0x5818b1e083543a9fULL, 0x29a870dd264609c2ULL, 0xbb79339bc9705c25ULL, 0xcac9f2a66c626f78ULL, + 0xaa0293454f8b6480ULL, 0xdbb25278ea9957ddULL, 0x4963113e05af023aULL, 0x38d3d003a0bd3167ULL, + 0x88f5d2f8427d15caULL, 0xf94513c5e76f2697ULL, 0x6b94508308597370ULL, 0x1a2491bead4b402dULL, + 0x7aeff05d8ea24bd5ULL, 0x0b5f31602bb07888ULL, 0x998e7226c4862d6fULL, 0xe83eb31b61941e32ULL, + 0xcd1b51825991f75eULL, 0xbcab90bffc83c403ULL, 0x2e7ad3f913b591e4ULL, 0x5fca12c4b6a7a2b9ULL, + 0x3f017327954ea941ULL, 0x4eb1b21a305c9a1cULL, 0xdc60f15cdf6acffbULL, 0xadd030617a78fca6ULL, + 0x1df6329a98b8d80bULL, 0x6c46f3a73daaeb56ULL, 0xfe97b0e1d29cbeb1ULL, 0x8f2771dc778e8decULL, + 0xefec103f54678614ULL, 0x9e5cd102f175b549ULL, 0x0c8d92441e43e0aeULL, 0x7d3d5379bb51d3f3ULL, + + 0x0000000000000000ULL, 0xbfdb6c480f15915eULL, 0x4b6ffec346bcb1d7ULL, 0xf4b4928b49a92089ULL, + 0x96dffd868d7963aeULL, 0x290491ce826cf2f0ULL, 0xddb00345cbc5d279ULL, 0x626b6f0dc4d04327ULL, + 0x1966dd5e42655437ULL, 0xa6bdb1164d70c569ULL, 0x5209239d04d9e5e0ULL, 0xedd24fd50bcc74beULL, + 0x8fb920d8cf1c3799ULL, 0x30624c90c009a6c7ULL, 0xc4d6de1b89a0864eULL, 0x7b0db25386b51710ULL, + 0x32cdbabc84caa86eULL, 0x8d16d6f48bdf3930ULL, 0x79a2447fc27619b9ULL, 0xc6792837cd6388e7ULL, + 0xa412473a09b3cbc0ULL, 0x1bc92b7206a65a9eULL, 0xef7db9f94f0f7a17ULL, 0x50a6d5b1401aeb49ULL, + 0x2bab67e2c6affc59ULL, 0x94700baac9ba6d07ULL, 0x60c4992180134d8eULL, 0xdf1ff5698f06dcd0ULL, + 0xbd749a644bd69ff7ULL, 0x02aff62c44c30ea9ULL, 0xf61b64a70d6a2e20ULL, 0x49c008ef027fbf7eULL, + 0x659b7579099550dcULL, 0xda4019310680c182ULL, 0x2ef48bba4f29e10bULL, 0x912fe7f2403c7055ULL, + 0xf34488ff84ec3372ULL, 0x4c9fe4b78bf9a22cULL, 0xb82b763cc25082a5ULL, 0x07f01a74cd4513fbULL, + 0x7cfda8274bf004ebULL, 0xc326c46f44e595b5ULL, 0x379256e40d4cb53cULL, 0x88493aac02592462ULL, + 0xea2255a1c6896745ULL, 0x55f939e9c99cf61bULL, 0xa14dab628035d692ULL, 0x1e96c72a8f2047ccULL, 
+ 0x5756cfc58d5ff8b2ULL, 0xe88da38d824a69ecULL, 0x1c393106cbe34965ULL, 0xa3e25d4ec4f6d83bULL, + 0xc189324300269b1cULL, 0x7e525e0b0f330a42ULL, 0x8ae6cc80469a2acbULL, 0x353da0c8498fbb95ULL, + 0x4e30129bcf3aac85ULL, 0xf1eb7ed3c02f3ddbULL, 0x055fec5889861d52ULL, 0xba84801086938c0cULL, + 0xd8efef1d4243cf2bULL, 0x673483554d565e75ULL, 0x938011de04ff7efcULL, 0x2c5b7d960beaefa2ULL, + 0xcb36eaf2132aa1b8ULL, 0x74ed86ba1c3f30e6ULL, 0x805914315596106fULL, 0x3f8278795a838131ULL, + 0x5de917749e53c216ULL, 0xe2327b3c91465348ULL, 0x1686e9b7d8ef73c1ULL, 0xa95d85ffd7fae29fULL, + 0xd25037ac514ff58fULL, 0x6d8b5be45e5a64d1ULL, 0x993fc96f17f34458ULL, 0x26e4a52718e6d506ULL, + 0x448fca2adc369621ULL, 0xfb54a662d323077fULL, 0x0fe034e99a8a27f6ULL, 0xb03b58a1959fb6a8ULL, + 0xf9fb504e97e009d6ULL, 0x46203c0698f59888ULL, 0xb294ae8dd15cb801ULL, 0x0d4fc2c5de49295fULL, + 0x6f24adc81a996a78ULL, 0xd0ffc180158cfb26ULL, 0x244b530b5c25dbafULL, 0x9b903f4353304af1ULL, + 0xe09d8d10d5855de1ULL, 0x5f46e158da90ccbfULL, 0xabf273d39339ec36ULL, 0x14291f9b9c2c7d68ULL, + 0x7642709658fc3e4fULL, 0xc9991cde57e9af11ULL, 0x3d2d8e551e408f98ULL, 0x82f6e21d11551ec6ULL, + 0xaead9f8b1abff164ULL, 0x1176f3c315aa603aULL, 0xe5c261485c0340b3ULL, 0x5a190d005316d1edULL, + 0x3872620d97c692caULL, 0x87a90e4598d30394ULL, 0x731d9cced17a231dULL, 0xccc6f086de6fb243ULL, + 0xb7cb42d558daa553ULL, 0x08102e9d57cf340dULL, 0xfca4bc161e661484ULL, 0x437fd05e117385daULL, + 0x2114bf53d5a3c6fdULL, 0x9ecfd31bdab657a3ULL, 0x6a7b4190931f772aULL, 0xd5a02dd89c0ae674ULL, + 0x9c6025379e75590aULL, 0x23bb497f9160c854ULL, 0xd70fdbf4d8c9e8ddULL, 0x68d4b7bcd7dc7983ULL, + 0x0abfd8b1130c3aa4ULL, 0xb564b4f91c19abfaULL, 0x41d0267255b08b73ULL, 0xfe0b4a3a5aa51a2dULL, + 0x8506f869dc100d3dULL, 0x3add9421d3059c63ULL, 0xce6906aa9aacbceaULL, 0x71b26ae295b92db4ULL, + 0x13d905ef51696e93ULL, 0xac0269a75e7cffcdULL, 0x58b6fb2c17d5df44ULL, 0xe76d976418c04e1aULL, + 0xa2b4f3b77ec2d01bULL, 0x1d6f9fff71d74145ULL, 0xe9db0d74387e61ccULL, 0x5600613c376bf092ULL, + 0x346b0e31f3bbb3b5ULL, 
0x8bb06279fcae22ebULL, 0x7f04f0f2b5070262ULL, 0xc0df9cbaba12933cULL, + 0xbbd22ee93ca7842cULL, 0x040942a133b21572ULL, 0xf0bdd02a7a1b35fbULL, 0x4f66bc62750ea4a5ULL, + 0x2d0dd36fb1dee782ULL, 0x92d6bf27becb76dcULL, 0x66622dacf7625655ULL, 0xd9b941e4f877c70bULL, + 0x9079490bfa087875ULL, 0x2fa22543f51de92bULL, 0xdb16b7c8bcb4c9a2ULL, 0x64cddb80b3a158fcULL, + 0x06a6b48d77711bdbULL, 0xb97dd8c578648a85ULL, 0x4dc94a4e31cdaa0cULL, 0xf21226063ed83b52ULL, + 0x891f9455b86d2c42ULL, 0x36c4f81db778bd1cULL, 0xc2706a96fed19d95ULL, 0x7dab06def1c40ccbULL, + 0x1fc069d335144fecULL, 0xa01b059b3a01deb2ULL, 0x54af971073a8fe3bULL, 0xeb74fb587cbd6f65ULL, + 0xc72f86ce775780c7ULL, 0x78f4ea8678421199ULL, 0x8c40780d31eb3110ULL, 0x339b14453efea04eULL, + 0x51f07b48fa2ee369ULL, 0xee2b1700f53b7237ULL, 0x1a9f858bbc9252beULL, 0xa544e9c3b387c3e0ULL, + 0xde495b903532d4f0ULL, 0x619237d83a2745aeULL, 0x9526a553738e6527ULL, 0x2afdc91b7c9bf479ULL, + 0x4896a616b84bb75eULL, 0xf74dca5eb75e2600ULL, 0x03f958d5fef70689ULL, 0xbc22349df1e297d7ULL, + 0xf5e23c72f39d28a9ULL, 0x4a39503afc88b9f7ULL, 0xbe8dc2b1b521997eULL, 0x0156aef9ba340820ULL, + 0x633dc1f47ee44b07ULL, 0xdce6adbc71f1da59ULL, 0x28523f373858fad0ULL, 0x9789537f374d6b8eULL, + 0xec84e12cb1f87c9eULL, 0x535f8d64beededc0ULL, 0xa7eb1feff744cd49ULL, 0x183073a7f8515c17ULL, + 0x7a5b1caa3c811f30ULL, 0xc58070e233948e6eULL, 0x3134e2697a3daee7ULL, 0x8eef8e2175283fb9ULL, + 0x698219456de871a3ULL, 0xd659750d62fde0fdULL, 0x22ede7862b54c074ULL, 0x9d368bce2441512aULL, + 0xff5de4c3e091120dULL, 0x4086888bef848353ULL, 0xb4321a00a62da3daULL, 0x0be97648a9383284ULL, + 0x70e4c41b2f8d2594ULL, 0xcf3fa8532098b4caULL, 0x3b8b3ad869319443ULL, 0x845056906624051dULL, + 0xe63b399da2f4463aULL, 0x59e055d5ade1d764ULL, 0xad54c75ee448f7edULL, 0x128fab16eb5d66b3ULL, + 0x5b4fa3f9e922d9cdULL, 0xe494cfb1e6374893ULL, 0x10205d3aaf9e681aULL, 0xaffb3172a08bf944ULL, + 0xcd905e7f645bba63ULL, 0x724b32376b4e2b3dULL, 0x86ffa0bc22e70bb4ULL, 0x3924ccf42df29aeaULL, + 0x42297ea7ab478dfaULL, 0xfdf212efa4521ca4ULL, 
0x09468064edfb3c2dULL, 0xb69dec2ce2eead73ULL, + 0xd4f68321263eee54ULL, 0x6b2def69292b7f0aULL, 0x9f997de260825f83ULL, 0x204211aa6f97ceddULL, + 0x0c196c3c647d217fULL, 0xb3c200746b68b021ULL, 0x477692ff22c190a8ULL, 0xf8adfeb72dd401f6ULL, + 0x9ac691bae90442d1ULL, 0x251dfdf2e611d38fULL, 0xd1a96f79afb8f306ULL, 0x6e720331a0ad6258ULL, + 0x157fb16226187548ULL, 0xaaa4dd2a290de416ULL, 0x5e104fa160a4c49fULL, 0xe1cb23e96fb155c1ULL, + 0x83a04ce4ab6116e6ULL, 0x3c7b20aca47487b8ULL, 0xc8cfb227eddda731ULL, 0x7714de6fe2c8366fULL, + 0x3ed4d680e0b78911ULL, 0x810fbac8efa2184fULL, 0x75bb2843a60b38c6ULL, 0xca60440ba91ea998ULL, + 0xa80b2b066dceeabfULL, 0x17d0474e62db7be1ULL, 0xe364d5c52b725b68ULL, 0x5cbfb98d2467ca36ULL, + 0x27b20bdea2d2dd26ULL, 0x98696796adc74c78ULL, 0x6cddf51de46e6cf1ULL, 0xd3069955eb7bfdafULL, + 0xb16df6582fabbe88ULL, 0x0eb69a1020be2fd6ULL, 0xfa02089b69170f5fULL, 0x45d964d366029e01ULL, + + 0x0000000000000000ULL, 0x3ea616bd2ae10d77ULL, 0x7d4c2d7a55c21aeeULL, 0x43ea3bc77f231799ULL, + 0xfa985af4ab8435dcULL, 0xc43e4c49816538abULL, 0x87d4778efe462f32ULL, 0xb9726133d4a72245ULL, + 0xc1e993ba0f9ff8d3ULL, 0xff4f8507257ef5a4ULL, 0xbca5bec05a5de23dULL, 0x8203a87d70bcef4aULL, + 0x3b71c94ea41bcd0fULL, 0x05d7dff38efac078ULL, 0x463de434f1d9d7e1ULL, 0x789bf289db38da96ULL, + 0xb70a012747a862cdULL, 0x89ac179a6d496fbaULL, 0xca462c5d126a7823ULL, 0xf4e03ae0388b7554ULL, + 0x4d925bd3ec2c5711ULL, 0x73344d6ec6cd5a66ULL, 0x30de76a9b9ee4dffULL, 0x0e786014930f4088ULL, + 0x76e3929d48379a1eULL, 0x4845842062d69769ULL, 0x0bafbfe71df580f0ULL, 0x3509a95a37148d87ULL, + 0x8c7bc869e3b3afc2ULL, 0xb2ddded4c952a2b5ULL, 0xf137e513b671b52cULL, 0xcf91f3ae9c90b85bULL, + 0x5acd241dd7c756f1ULL, 0x646b32a0fd265b86ULL, 0x2781096782054c1fULL, 0x19271fdaa8e44168ULL, + 0xa0557ee97c43632dULL, 0x9ef3685456a26e5aULL, 0xdd195393298179c3ULL, 0xe3bf452e036074b4ULL, + 0x9b24b7a7d858ae22ULL, 0xa582a11af2b9a355ULL, 0xe6689add8d9ab4ccULL, 0xd8ce8c60a77bb9bbULL, + 0x61bced5373dc9bfeULL, 0x5f1afbee593d9689ULL, 0x1cf0c029261e8110ULL, 
0x2256d6940cff8c67ULL, + 0xedc7253a906f343cULL, 0xd3613387ba8e394bULL, 0x908b0840c5ad2ed2ULL, 0xae2d1efdef4c23a5ULL, + 0x175f7fce3beb01e0ULL, 0x29f96973110a0c97ULL, 0x6a1352b46e291b0eULL, 0x54b5440944c81679ULL, + 0x2c2eb6809ff0ccefULL, 0x1288a03db511c198ULL, 0x51629bfaca32d601ULL, 0x6fc48d47e0d3db76ULL, + 0xd6b6ec743474f933ULL, 0xe810fac91e95f444ULL, 0xabfac10e61b6e3ddULL, 0x955cd7b34b57eeaaULL, + 0xb59a483baf8eade2ULL, 0x8b3c5e86856fa095ULL, 0xc8d66541fa4cb70cULL, 0xf67073fcd0adba7bULL, + 0x4f0212cf040a983eULL, 0x71a404722eeb9549ULL, 0x324e3fb551c882d0ULL, 0x0ce829087b298fa7ULL, + 0x7473db81a0115531ULL, 0x4ad5cd3c8af05846ULL, 0x093ff6fbf5d34fdfULL, 0x3799e046df3242a8ULL, + 0x8eeb81750b9560edULL, 0xb04d97c821746d9aULL, 0xf3a7ac0f5e577a03ULL, 0xcd01bab274b67774ULL, + 0x0290491ce826cf2fULL, 0x3c365fa1c2c7c258ULL, 0x7fdc6466bde4d5c1ULL, 0x417a72db9705d8b6ULL, + 0xf80813e843a2faf3ULL, 0xc6ae05556943f784ULL, 0x85443e921660e01dULL, 0xbbe2282f3c81ed6aULL, + 0xc379daa6e7b937fcULL, 0xfddfcc1bcd583a8bULL, 0xbe35f7dcb27b2d12ULL, 0x8093e161989a2065ULL, + 0x39e180524c3d0220ULL, 0x074796ef66dc0f57ULL, 0x44adad2819ff18ceULL, 0x7a0bbb95331e15b9ULL, + 0xef576c267849fb13ULL, 0xd1f17a9b52a8f664ULL, 0x921b415c2d8be1fdULL, 0xacbd57e1076aec8aULL, + 0x15cf36d2d3cdcecfULL, 0x2b69206ff92cc3b8ULL, 0x68831ba8860fd421ULL, 0x56250d15aceed956ULL, + 0x2ebeff9c77d603c0ULL, 0x1018e9215d370eb7ULL, 0x53f2d2e62214192eULL, 0x6d54c45b08f51459ULL, + 0xd426a568dc52361cULL, 0xea80b3d5f6b33b6bULL, 0xa96a881289902cf2ULL, 0x97cc9eafa3712185ULL, + 0x585d6d013fe199deULL, 0x66fb7bbc150094a9ULL, 0x2511407b6a238330ULL, 0x1bb756c640c28e47ULL, + 0xa2c537f59465ac02ULL, 0x9c632148be84a175ULL, 0xdf891a8fc1a7b6ecULL, 0xe12f0c32eb46bb9bULL, + 0x99b4febb307e610dULL, 0xa712e8061a9f6c7aULL, 0xe4f8d3c165bc7be3ULL, 0xda5ec57c4f5d7694ULL, + 0x632ca44f9bfa54d1ULL, 0x5d8ab2f2b11b59a6ULL, 0x1e608935ce384e3fULL, 0x20c69f88e4d94348ULL, + 0x5fedb624078ac8afULL, 0x614ba0992d6bc5d8ULL, 0x22a19b5e5248d241ULL, 0x1c078de378a9df36ULL, + 
0xa575ecd0ac0efd73ULL, 0x9bd3fa6d86eff004ULL, 0xd839c1aaf9cce79dULL, 0xe69fd717d32deaeaULL, + 0x9e04259e0815307cULL, 0xa0a2332322f43d0bULL, 0xe34808e45dd72a92ULL, 0xddee1e59773627e5ULL, + 0x649c7f6aa39105a0ULL, 0x5a3a69d7897008d7ULL, 0x19d05210f6531f4eULL, 0x277644addcb21239ULL, + 0xe8e7b7034022aa62ULL, 0xd641a1be6ac3a715ULL, 0x95ab9a7915e0b08cULL, 0xab0d8cc43f01bdfbULL, + 0x127fedf7eba69fbeULL, 0x2cd9fb4ac14792c9ULL, 0x6f33c08dbe648550ULL, 0x5195d63094858827ULL, + 0x290e24b94fbd52b1ULL, 0x17a83204655c5fc6ULL, 0x544209c31a7f485fULL, 0x6ae41f7e309e4528ULL, + 0xd3967e4de439676dULL, 0xed3068f0ced86a1aULL, 0xaeda5337b1fb7d83ULL, 0x907c458a9b1a70f4ULL, + 0x05209239d04d9e5eULL, 0x3b868484faac9329ULL, 0x786cbf43858f84b0ULL, 0x46caa9feaf6e89c7ULL, + 0xffb8c8cd7bc9ab82ULL, 0xc11ede705128a6f5ULL, 0x82f4e5b72e0bb16cULL, 0xbc52f30a04eabc1bULL, + 0xc4c90183dfd2668dULL, 0xfa6f173ef5336bfaULL, 0xb9852cf98a107c63ULL, 0x87233a44a0f17114ULL, + 0x3e515b7774565351ULL, 0x00f74dca5eb75e26ULL, 0x431d760d219449bfULL, 0x7dbb60b00b7544c8ULL, + 0xb22a931e97e5fc93ULL, 0x8c8c85a3bd04f1e4ULL, 0xcf66be64c227e67dULL, 0xf1c0a8d9e8c6eb0aULL, + 0x48b2c9ea3c61c94fULL, 0x7614df571680c438ULL, 0x35fee49069a3d3a1ULL, 0x0b58f22d4342ded6ULL, + 0x73c300a4987a0440ULL, 0x4d651619b29b0937ULL, 0x0e8f2ddecdb81eaeULL, 0x30293b63e75913d9ULL, + 0x895b5a5033fe319cULL, 0xb7fd4ced191f3cebULL, 0xf417772a663c2b72ULL, 0xcab161974cdd2605ULL, + 0xea77fe1fa804654dULL, 0xd4d1e8a282e5683aULL, 0x973bd365fdc67fa3ULL, 0xa99dc5d8d72772d4ULL, + 0x10efa4eb03805091ULL, 0x2e49b25629615de6ULL, 0x6da3899156424a7fULL, 0x53059f2c7ca34708ULL, + 0x2b9e6da5a79b9d9eULL, 0x15387b188d7a90e9ULL, 0x56d240dff2598770ULL, 0x68745662d8b88a07ULL, + 0xd10637510c1fa842ULL, 0xefa021ec26fea535ULL, 0xac4a1a2b59ddb2acULL, 0x92ec0c96733cbfdbULL, + 0x5d7dff38efac0780ULL, 0x63dbe985c54d0af7ULL, 0x2031d242ba6e1d6eULL, 0x1e97c4ff908f1019ULL, + 0xa7e5a5cc4428325cULL, 0x9943b3716ec93f2bULL, 0xdaa988b611ea28b2ULL, 0xe40f9e0b3b0b25c5ULL, + 0x9c946c82e033ff53ULL, 
0xa2327a3fcad2f224ULL, 0xe1d841f8b5f1e5bdULL, 0xdf7e57459f10e8caULL, + 0x660c36764bb7ca8fULL, 0x58aa20cb6156c7f8ULL, 0x1b401b0c1e75d061ULL, 0x25e60db13494dd16ULL, + 0xb0bada027fc333bcULL, 0x8e1cccbf55223ecbULL, 0xcdf6f7782a012952ULL, 0xf350e1c500e02425ULL, + 0x4a2280f6d4470660ULL, 0x7484964bfea60b17ULL, 0x376ead8c81851c8eULL, 0x09c8bb31ab6411f9ULL, + 0x715349b8705ccb6fULL, 0x4ff55f055abdc618ULL, 0x0c1f64c2259ed181ULL, 0x32b9727f0f7fdcf6ULL, + 0x8bcb134cdbd8feb3ULL, 0xb56d05f1f139f3c4ULL, 0xf6873e368e1ae45dULL, 0xc821288ba4fbe92aULL, + 0x07b0db25386b5171ULL, 0x3916cd98128a5c06ULL, 0x7afcf65f6da94b9fULL, 0x445ae0e2474846e8ULL, + 0xfd2881d193ef64adULL, 0xc38e976cb90e69daULL, 0x8064acabc62d7e43ULL, 0xbec2ba16eccc7334ULL, + 0xc659489f37f4a9a2ULL, 0xf8ff5e221d15a4d5ULL, 0xbb1565e56236b34cULL, 0x85b3735848d7be3bULL, + 0x3cc1126b9c709c7eULL, 0x026704d6b6919109ULL, 0x418d3f11c9b28690ULL, 0x7f2b29ace3538be7ULL, + + 0x0000000000000000ULL, 0x169489cc969951e5ULL, 0x2d2913992d32a3caULL, 0x3bbd9a55bbabf22fULL, + 0x5a5227325a654794ULL, 0x4cc6aefeccfc1671ULL, 0x777b34ab7757e45eULL, 0x61efbd67e1ceb5bbULL, + 0xb4a44e64b4ca8f28ULL, 0xa230c7a82253decdULL, 0x998d5dfd99f82ce2ULL, 0x8f19d4310f617d07ULL, + 0xeef66956eeafc8bcULL, 0xf862e09a78369959ULL, 0xc3df7acfc39d6b76ULL, 0xd54bf30355043a93ULL, + 0x5d91ba9a31028d3bULL, 0x4b053356a79bdcdeULL, 0x70b8a9031c302ef1ULL, 0x662c20cf8aa97f14ULL, + 0x07c39da86b67caafULL, 0x11571464fdfe9b4aULL, 0x2aea8e3146556965ULL, 0x3c7e07fdd0cc3880ULL, + 0xe935f4fe85c80213ULL, 0xffa17d32135153f6ULL, 0xc41ce767a8faa1d9ULL, 0xd2886eab3e63f03cULL, + 0xb367d3ccdfad4587ULL, 0xa5f35a0049341462ULL, 0x9e4ec055f29fe64dULL, 0x88da49996406b7a8ULL, + 0xbb23753462051a76ULL, 0xadb7fcf8f49c4b93ULL, 0x960a66ad4f37b9bcULL, 0x809eef61d9aee859ULL, + 0xe171520638605de2ULL, 0xf7e5dbcaaef90c07ULL, 0xcc58419f1552fe28ULL, 0xdaccc85383cbafcdULL, + 0x0f873b50d6cf955eULL, 0x1913b29c4056c4bbULL, 0x22ae28c9fbfd3694ULL, 0x343aa1056d646771ULL, + 0x55d51c628caad2caULL, 0x434195ae1a33832fULL, 
0x78fc0ffba1987100ULL, 0x6e688637370120e5ULL, + 0xe6b2cfae5307974dULL, 0xf0264662c59ec6a8ULL, 0xcb9bdc377e353487ULL, 0xdd0f55fbe8ac6562ULL, + 0xbce0e89c0962d0d9ULL, 0xaa7461509ffb813cULL, 0x91c9fb0524507313ULL, 0x875d72c9b2c922f6ULL, + 0x521681cae7cd1865ULL, 0x4482080671544980ULL, 0x7f3f9253caffbbafULL, 0x69ab1b9f5c66ea4aULL, + 0x0844a6f8bda85ff1ULL, 0x1ed02f342b310e14ULL, 0x256db561909afc3bULL, 0x33f93cad0603addeULL, + 0x429fcc3b9c9da787ULL, 0x540b45f70a04f662ULL, 0x6fb6dfa2b1af044dULL, 0x7922566e273655a8ULL, + 0x18cdeb09c6f8e013ULL, 0x0e5962c55061b1f6ULL, 0x35e4f890ebca43d9ULL, 0x2370715c7d53123cULL, + 0xf63b825f285728afULL, 0xe0af0b93bece794aULL, 0xdb1291c605658b65ULL, 0xcd86180a93fcda80ULL, + 0xac69a56d72326f3bULL, 0xbafd2ca1e4ab3edeULL, 0x8140b6f45f00ccf1ULL, 0x97d43f38c9999d14ULL, + 0x1f0e76a1ad9f2abcULL, 0x099aff6d3b067b59ULL, 0x3227653880ad8976ULL, 0x24b3ecf41634d893ULL, + 0x455c5193f7fa6d28ULL, 0x53c8d85f61633ccdULL, 0x6875420adac8cee2ULL, 0x7ee1cbc64c519f07ULL, + 0xabaa38c51955a594ULL, 0xbd3eb1098fccf471ULL, 0x86832b5c3467065eULL, 0x9017a290a2fe57bbULL, + 0xf1f81ff74330e200ULL, 0xe76c963bd5a9b3e5ULL, 0xdcd10c6e6e0241caULL, 0xca4585a2f89b102fULL, + 0xf9bcb90ffe98bdf1ULL, 0xef2830c36801ec14ULL, 0xd495aa96d3aa1e3bULL, 0xc201235a45334fdeULL, + 0xa3ee9e3da4fdfa65ULL, 0xb57a17f13264ab80ULL, 0x8ec78da489cf59afULL, 0x985304681f56084aULL, + 0x4d18f76b4a5232d9ULL, 0x5b8c7ea7dccb633cULL, 0x6031e4f267609113ULL, 0x76a56d3ef1f9c0f6ULL, + 0x174ad0591037754dULL, 0x01de599586ae24a8ULL, 0x3a63c3c03d05d687ULL, 0x2cf74a0cab9c8762ULL, + 0xa42d0395cf9a30caULL, 0xb2b98a595903612fULL, 0x8904100ce2a89300ULL, 0x9f9099c07431c2e5ULL, + 0xfe7f24a795ff775eULL, 0xe8ebad6b036626bbULL, 0xd356373eb8cdd494ULL, 0xc5c2bef22e548571ULL, + 0x10894df17b50bfe2ULL, 0x061dc43dedc9ee07ULL, 0x3da05e6856621c28ULL, 0x2b34d7a4c0fb4dcdULL, + 0x4adb6ac32135f876ULL, 0x5c4fe30fb7aca993ULL, 0x67f2795a0c075bbcULL, 0x7166f0969a9e0a59ULL, + 0x853f9877393b4f0eULL, 0x93ab11bbafa21eebULL, 0xa8168bee1409ecc4ULL, 
0xbe8202228290bd21ULL, + 0xdf6dbf45635e089aULL, 0xc9f93689f5c7597fULL, 0xf244acdc4e6cab50ULL, 0xe4d02510d8f5fab5ULL, + 0x319bd6138df1c026ULL, 0x270f5fdf1b6891c3ULL, 0x1cb2c58aa0c363ecULL, 0x0a264c46365a3209ULL, + 0x6bc9f121d79487b2ULL, 0x7d5d78ed410dd657ULL, 0x46e0e2b8faa62478ULL, 0x50746b746c3f759dULL, + 0xd8ae22ed0839c235ULL, 0xce3aab219ea093d0ULL, 0xf5873174250b61ffULL, 0xe313b8b8b392301aULL, + 0x82fc05df525c85a1ULL, 0x94688c13c4c5d444ULL, 0xafd516467f6e266bULL, 0xb9419f8ae9f7778eULL, + 0x6c0a6c89bcf34d1dULL, 0x7a9ee5452a6a1cf8ULL, 0x41237f1091c1eed7ULL, 0x57b7f6dc0758bf32ULL, + 0x36584bbbe6960a89ULL, 0x20ccc277700f5b6cULL, 0x1b715822cba4a943ULL, 0x0de5d1ee5d3df8a6ULL, + 0x3e1ced435b3e5578ULL, 0x2888648fcda7049dULL, 0x1335feda760cf6b2ULL, 0x05a17716e095a757ULL, + 0x644eca71015b12ecULL, 0x72da43bd97c24309ULL, 0x4967d9e82c69b126ULL, 0x5ff35024baf0e0c3ULL, + 0x8ab8a327eff4da50ULL, 0x9c2c2aeb796d8bb5ULL, 0xa791b0bec2c6799aULL, 0xb1053972545f287fULL, + 0xd0ea8415b5919dc4ULL, 0xc67e0dd92308cc21ULL, 0xfdc3978c98a33e0eULL, 0xeb571e400e3a6febULL, + 0x638d57d96a3cd843ULL, 0x7519de15fca589a6ULL, 0x4ea44440470e7b89ULL, 0x5830cd8cd1972a6cULL, + 0x39df70eb30599fd7ULL, 0x2f4bf927a6c0ce32ULL, 0x14f663721d6b3c1dULL, 0x0262eabe8bf26df8ULL, + 0xd72919bddef6576bULL, 0xc1bd9071486f068eULL, 0xfa000a24f3c4f4a1ULL, 0xec9483e8655da544ULL, + 0x8d7b3e8f849310ffULL, 0x9befb743120a411aULL, 0xa0522d16a9a1b335ULL, 0xb6c6a4da3f38e2d0ULL, + 0xc7a0544ca5a6e889ULL, 0xd134dd80333fb96cULL, 0xea8947d588944b43ULL, 0xfc1dce191e0d1aa6ULL, + 0x9df2737effc3af1dULL, 0x8b66fab2695afef8ULL, 0xb0db60e7d2f10cd7ULL, 0xa64fe92b44685d32ULL, + 0x73041a28116c67a1ULL, 0x659093e487f53644ULL, 0x5e2d09b13c5ec46bULL, 0x48b9807daac7958eULL, + 0x29563d1a4b092035ULL, 0x3fc2b4d6dd9071d0ULL, 0x047f2e83663b83ffULL, 0x12eba74ff0a2d21aULL, + 0x9a31eed694a465b2ULL, 0x8ca5671a023d3457ULL, 0xb718fd4fb996c678ULL, 0xa18c74832f0f979dULL, + 0xc063c9e4cec12226ULL, 0xd6f74028585873c3ULL, 0xed4ada7de3f381ecULL, 0xfbde53b1756ad009ULL, + 
0x2e95a0b2206eea9aULL, 0x3801297eb6f7bb7fULL, 0x03bcb32b0d5c4950ULL, 0x15283ae79bc518b5ULL, + 0x74c787807a0bad0eULL, 0x62530e4cec92fcebULL, 0x59ee941957390ec4ULL, 0x4f7a1dd5c1a05f21ULL, + 0x7c832178c7a3f2ffULL, 0x6a17a8b4513aa31aULL, 0x51aa32e1ea915135ULL, 0x473ebb2d7c0800d0ULL, + 0x26d1064a9dc6b56bULL, 0x30458f860b5fe48eULL, 0x0bf815d3b0f416a1ULL, 0x1d6c9c1f266d4744ULL, + 0xc8276f1c73697dd7ULL, 0xdeb3e6d0e5f02c32ULL, 0xe50e7c855e5bde1dULL, 0xf39af549c8c28ff8ULL, + 0x9275482e290c3a43ULL, 0x84e1c1e2bf956ba6ULL, 0xbf5c5bb7043e9989ULL, 0xa9c8d27b92a7c86cULL, + 0x21129be2f6a17fc4ULL, 0x3786122e60382e21ULL, 0x0c3b887bdb93dc0eULL, 0x1aaf01b74d0a8debULL, + 0x7b40bcd0acc43850ULL, 0x6dd4351c3a5d69b5ULL, 0x5669af4981f69b9aULL, 0x40fd2685176fca7fULL, + 0x95b6d586426bf0ecULL, 0x83225c4ad4f2a109ULL, 0xb89fc61f6f595326ULL, 0xae0b4fd3f9c002c3ULL, + 0xcfe4f2b4180eb778ULL, 0xd9707b788e97e69dULL, 0xe2cde12d353c14b2ULL, 0xf45968e1a3a54557ULL, + + 0x0000000000000000ULL, 0x0aed36d1a3bb9d7fULL, 0x15da6da347773afeULL, 0x1f375b72e4cca781ULL, + 0x2bb4db468eee75fcULL, 0x2159ed972d55e883ULL, 0x3e6eb6e5c9994f02ULL, 0x348380346a22d27dULL, + 0x5769b68d1ddcebf8ULL, 0x5d84805cbe677687ULL, 0x42b3db2e5aabd106ULL, 0x485eedfff9104c79ULL, + 0x7cdd6dcb93329e04ULL, 0x76305b1a3089037bULL, 0x69070068d445a4faULL, 0x63ea36b977fe3985ULL, + 0xaed36d1a3bb9d7f0ULL, 0xa43e5bcb98024a8fULL, 0xbb0900b97cceed0eULL, 0xb1e43668df757071ULL, + 0x8567b65cb557a20cULL, 0x8f8a808d16ec3f73ULL, 0x90bddbfff22098f2ULL, 0x9a50ed2e519b058dULL, + 0xf9badb9726653c08ULL, 0xf357ed4685dea177ULL, 0xec60b634611206f6ULL, 0xe68d80e5c2a99b89ULL, + 0xd20e00d1a88b49f4ULL, 0xd8e336000b30d48bULL, 0xc7d46d72effc730aULL, 0xcd395ba34c47ee75ULL, + 0x697ffc672fe43c8bULL, 0x6392cab68c5fa1f4ULL, 0x7ca591c468930675ULL, 0x7648a715cb289b0aULL, + 0x42cb2721a10a4977ULL, 0x482611f002b1d408ULL, 0x57114a82e67d7389ULL, 0x5dfc7c5345c6eef6ULL, + 0x3e164aea3238d773ULL, 0x34fb7c3b91834a0cULL, 0x2bcc2749754fed8dULL, 0x21211198d6f470f2ULL, + 0x15a291acbcd6a28fULL, 
0x1f4fa77d1f6d3ff0ULL, 0x0078fc0ffba19871ULL, 0x0a95cade581a050eULL, + 0xc7ac917d145deb7bULL, 0xcd41a7acb7e67604ULL, 0xd276fcde532ad185ULL, 0xd89bca0ff0914cfaULL, + 0xec184a3b9ab39e87ULL, 0xe6f57cea390803f8ULL, 0xf9c22798ddc4a479ULL, 0xf32f11497e7f3906ULL, + 0x90c527f009810083ULL, 0x9a281121aa3a9dfcULL, 0x851f4a534ef63a7dULL, 0x8ff27c82ed4da702ULL, + 0xbb71fcb6876f757fULL, 0xb19cca6724d4e800ULL, 0xaeab9115c0184f81ULL, 0xa446a7c463a3d2feULL, + 0xd2fff8ce5fc87916ULL, 0xd812ce1ffc73e469ULL, 0xc725956d18bf43e8ULL, 0xcdc8a3bcbb04de97ULL, + 0xf94b2388d1260ceaULL, 0xf3a61559729d9195ULL, 0xec914e2b96513614ULL, 0xe67c78fa35eaab6bULL, + 0x85964e43421492eeULL, 0x8f7b7892e1af0f91ULL, 0x904c23e00563a810ULL, 0x9aa11531a6d8356fULL, + 0xae229505ccfae712ULL, 0xa4cfa3d46f417a6dULL, 0xbbf8f8a68b8dddecULL, 0xb115ce7728364093ULL, + 0x7c2c95d46471aee6ULL, 0x76c1a305c7ca3399ULL, 0x69f6f87723069418ULL, 0x631bcea680bd0967ULL, + 0x57984e92ea9fdb1aULL, 0x5d75784349244665ULL, 0x42422331ade8e1e4ULL, 0x48af15e00e537c9bULL, + 0x2b45235979ad451eULL, 0x21a81588da16d861ULL, 0x3e9f4efa3eda7fe0ULL, 0x3472782b9d61e29fULL, + 0x00f1f81ff74330e2ULL, 0x0a1ccece54f8ad9dULL, 0x152b95bcb0340a1cULL, 0x1fc6a36d138f9763ULL, + 0xbb8004a9702c459dULL, 0xb16d3278d397d8e2ULL, 0xae5a690a375b7f63ULL, 0xa4b75fdb94e0e21cULL, + 0x9034dfeffec23061ULL, 0x9ad9e93e5d79ad1eULL, 0x85eeb24cb9b50a9fULL, 0x8f03849d1a0e97e0ULL, + 0xece9b2246df0ae65ULL, 0xe60484f5ce4b331aULL, 0xf933df872a87949bULL, 0xf3dee956893c09e4ULL, + 0xc75d6962e31edb99ULL, 0xcdb05fb340a546e6ULL, 0xd28704c1a469e167ULL, 0xd86a321007d27c18ULL, + 0x155369b34b95926dULL, 0x1fbe5f62e82e0f12ULL, 0x008904100ce2a893ULL, 0x0a6432c1af5935ecULL, + 0x3ee7b2f5c57be791ULL, 0x340a842466c07aeeULL, 0x2b3ddf56820cdd6fULL, 0x21d0e98721b74010ULL, + 0x423adf3e56497995ULL, 0x48d7e9eff5f2e4eaULL, 0x57e0b29d113e436bULL, 0x5d0d844cb285de14ULL, + 0x698e0478d8a70c69ULL, 0x636332a97b1c9116ULL, 0x7c5469db9fd03697ULL, 0x76b95f0a3c6babe8ULL, + 0x9126d7cfe7076147ULL, 0x9bcbe11e44bcfc38ULL, 
0x84fcba6ca0705bb9ULL, 0x8e118cbd03cbc6c6ULL, + 0xba920c8969e914bbULL, 0xb07f3a58ca5289c4ULL, 0xaf48612a2e9e2e45ULL, 0xa5a557fb8d25b33aULL, + 0xc64f6142fadb8abfULL, 0xcca25793596017c0ULL, 0xd3950ce1bdacb041ULL, 0xd9783a301e172d3eULL, + 0xedfbba047435ff43ULL, 0xe7168cd5d78e623cULL, 0xf821d7a73342c5bdULL, 0xf2cce17690f958c2ULL, + 0x3ff5bad5dcbeb6b7ULL, 0x35188c047f052bc8ULL, 0x2a2fd7769bc98c49ULL, 0x20c2e1a738721136ULL, + 0x144161935250c34bULL, 0x1eac5742f1eb5e34ULL, 0x019b0c301527f9b5ULL, 0x0b763ae1b69c64caULL, + 0x689c0c58c1625d4fULL, 0x62713a8962d9c030ULL, 0x7d4661fb861567b1ULL, 0x77ab572a25aefaceULL, + 0x4328d71e4f8c28b3ULL, 0x49c5e1cfec37b5ccULL, 0x56f2babd08fb124dULL, 0x5c1f8c6cab408f32ULL, + 0xf8592ba8c8e35dccULL, 0xf2b41d796b58c0b3ULL, 0xed83460b8f946732ULL, 0xe76e70da2c2ffa4dULL, + 0xd3edf0ee460d2830ULL, 0xd900c63fe5b6b54fULL, 0xc6379d4d017a12ceULL, 0xccdaab9ca2c18fb1ULL, + 0xaf309d25d53fb634ULL, 0xa5ddabf476842b4bULL, 0xbaeaf08692488ccaULL, 0xb007c65731f311b5ULL, + 0x848446635bd1c3c8ULL, 0x8e6970b2f86a5eb7ULL, 0x915e2bc01ca6f936ULL, 0x9bb31d11bf1d6449ULL, + 0x568a46b2f35a8a3cULL, 0x5c67706350e11743ULL, 0x43502b11b42db0c2ULL, 0x49bd1dc017962dbdULL, + 0x7d3e9df47db4ffc0ULL, 0x77d3ab25de0f62bfULL, 0x68e4f0573ac3c53eULL, 0x6209c68699785841ULL, + 0x01e3f03fee8661c4ULL, 0x0b0ec6ee4d3dfcbbULL, 0x14399d9ca9f15b3aULL, 0x1ed4ab4d0a4ac645ULL, + 0x2a572b7960681438ULL, 0x20ba1da8c3d38947ULL, 0x3f8d46da271f2ec6ULL, 0x3560700b84a4b3b9ULL, + 0x43d92f01b8cf1851ULL, 0x493419d01b74852eULL, 0x560342a2ffb822afULL, 0x5cee74735c03bfd0ULL, + 0x686df44736216dadULL, 0x6280c296959af0d2ULL, 0x7db799e471565753ULL, 0x775aaf35d2edca2cULL, + 0x14b0998ca513f3a9ULL, 0x1e5daf5d06a86ed6ULL, 0x016af42fe264c957ULL, 0x0b87c2fe41df5428ULL, + 0x3f0442ca2bfd8655ULL, 0x35e9741b88461b2aULL, 0x2ade2f696c8abcabULL, 0x203319b8cf3121d4ULL, + 0xed0a421b8376cfa1ULL, 0xe7e774ca20cd52deULL, 0xf8d02fb8c401f55fULL, 0xf23d196967ba6820ULL, + 0xc6be995d0d98ba5dULL, 0xcc53af8cae232722ULL, 0xd364f4fe4aef80a3ULL, 
0xd989c22fe9541ddcULL, + 0xba63f4969eaa2459ULL, 0xb08ec2473d11b926ULL, 0xafb99935d9dd1ea7ULL, 0xa554afe47a6683d8ULL, + 0x91d72fd0104451a5ULL, 0x9b3a1901b3ffccdaULL, 0x840d427357336b5bULL, 0x8ee074a2f488f624ULL, + 0x2aa6d366972b24daULL, 0x204be5b73490b9a5ULL, 0x3f7cbec5d05c1e24ULL, 0x3591881473e7835bULL, + 0x0112082019c55126ULL, 0x0bff3ef1ba7ecc59ULL, 0x14c865835eb26bd8ULL, 0x1e255352fd09f6a7ULL, + 0x7dcf65eb8af7cf22ULL, 0x7722533a294c525dULL, 0x68150848cd80f5dcULL, 0x62f83e996e3b68a3ULL, + 0x567bbead0419badeULL, 0x5c96887ca7a227a1ULL, 0x43a1d30e436e8020ULL, 0x494ce5dfe0d51d5fULL, + 0x8475be7cac92f32aULL, 0x8e9888ad0f296e55ULL, 0x91afd3dfebe5c9d4ULL, 0x9b42e50e485e54abULL, + 0xafc1653a227c86d6ULL, 0xa52c53eb81c71ba9ULL, 0xba1b0899650bbc28ULL, 0xb0f63e48c6b02157ULL, + 0xd31c08f1b14e18d2ULL, 0xd9f13e2012f585adULL, 0xc6c66552f639222cULL, 0xcc2b53835582bf53ULL, + 0xf8a8d3b73fa06d2eULL, 0xf245e5669c1bf051ULL, 0xed72be1478d757d0ULL, 0xe79f88c5db6ccaafULL, + + 0x0000000000000000ULL, 0xb0bc2e589204f500ULL, 0x55a17ae27c9e796bULL, 0xe51d54baee9a8c6bULL, + 0xab42f5c4f93cf2d6ULL, 0x1bfedb9c6b3807d6ULL, 0xfee38f2685a28bbdULL, 0x4e5fa17e17a67ebdULL, + 0x625ccddaaaee76c7ULL, 0xd2e0e38238ea83c7ULL, 0x37fdb738d6700facULL, 0x874199604474faacULL, + 0xc91e381e53d28411ULL, 0x79a21646c1d67111ULL, 0x9cbf42fc2f4cfd7aULL, 0x2c036ca4bd48087aULL, + 0xc4b99bb555dced8eULL, 0x7405b5edc7d8188eULL, 0x9118e157294294e5ULL, 0x21a4cf0fbb4661e5ULL, + 0x6ffb6e71ace01f58ULL, 0xdf4740293ee4ea58ULL, 0x3a5a1493d07e6633ULL, 0x8ae63acb427a9333ULL, + 0xa6e5566fff329b49ULL, 0x165978376d366e49ULL, 0xf3442c8d83ace222ULL, 0x43f802d511a81722ULL, + 0x0da7a3ab060e699fULL, 0xbd1b8df3940a9c9fULL, 0x5806d9497a9010f4ULL, 0xe8baf711e894e5f4ULL, + 0xbdaa1139f32e4877ULL, 0x0d163f61612abd77ULL, 0xe80b6bdb8fb0311cULL, 0x58b745831db4c41cULL, + 0x16e8e4fd0a12baa1ULL, 0xa654caa598164fa1ULL, 0x43499e1f768cc3caULL, 0xf3f5b047e48836caULL, + 0xdff6dce359c03eb0ULL, 0x6f4af2bbcbc4cbb0ULL, 0x8a57a601255e47dbULL, 0x3aeb8859b75ab2dbULL, 
+ 0x74b42927a0fccc66ULL, 0xc408077f32f83966ULL, 0x211553c5dc62b50dULL, 0x91a97d9d4e66400dULL, + 0x79138a8ca6f2a5f9ULL, 0xc9afa4d434f650f9ULL, 0x2cb2f06eda6cdc92ULL, 0x9c0ede3648682992ULL, + 0xd2517f485fce572fULL, 0x62ed5110cdcaa22fULL, 0x87f005aa23502e44ULL, 0x374c2bf2b154db44ULL, + 0x1b4f47560c1cd33eULL, 0xabf3690e9e18263eULL, 0x4eee3db47082aa55ULL, 0xfe5213ece2865f55ULL, + 0xb00db292f52021e8ULL, 0x00b19cca6724d4e8ULL, 0xe5acc87089be5883ULL, 0x5510e6281bbaad83ULL, + 0x4f8d0420becb0385ULL, 0xff312a782ccff685ULL, 0x1a2c7ec2c2557aeeULL, 0xaa90509a50518feeULL, + 0xe4cff1e447f7f153ULL, 0x5473dfbcd5f30453ULL, 0xb16e8b063b698838ULL, 0x01d2a55ea96d7d38ULL, + 0x2dd1c9fa14257542ULL, 0x9d6de7a286218042ULL, 0x7870b31868bb0c29ULL, 0xc8cc9d40fabff929ULL, + 0x86933c3eed198794ULL, 0x362f12667f1d7294ULL, 0xd33246dc9187feffULL, 0x638e688403830bffULL, + 0x8b349f95eb17ee0bULL, 0x3b88b1cd79131b0bULL, 0xde95e57797899760ULL, 0x6e29cb2f058d6260ULL, + 0x20766a51122b1cddULL, 0x90ca4409802fe9ddULL, 0x75d710b36eb565b6ULL, 0xc56b3eebfcb190b6ULL, + 0xe968524f41f998ccULL, 0x59d47c17d3fd6dccULL, 0xbcc928ad3d67e1a7ULL, 0x0c7506f5af6314a7ULL, + 0x422aa78bb8c56a1aULL, 0xf29689d32ac19f1aULL, 0x178bdd69c45b1371ULL, 0xa737f331565fe671ULL, + 0xf22715194de54bf2ULL, 0x429b3b41dfe1bef2ULL, 0xa7866ffb317b3299ULL, 0x173a41a3a37fc799ULL, + 0x5965e0ddb4d9b924ULL, 0xe9d9ce8526dd4c24ULL, 0x0cc49a3fc847c04fULL, 0xbc78b4675a43354fULL, + 0x907bd8c3e70b3d35ULL, 0x20c7f69b750fc835ULL, 0xc5daa2219b95445eULL, 0x75668c790991b15eULL, + 0x3b392d071e37cfe3ULL, 0x8b85035f8c333ae3ULL, 0x6e9857e562a9b688ULL, 0xde2479bdf0ad4388ULL, + 0x369e8eac1839a67cULL, 0x8622a0f48a3d537cULL, 0x633ff44e64a7df17ULL, 0xd383da16f6a32a17ULL, + 0x9ddc7b68e10554aaULL, 0x2d6055307301a1aaULL, 0xc87d018a9d9b2dc1ULL, 0x78c12fd20f9fd8c1ULL, + 0x54c24376b2d7d0bbULL, 0xe47e6d2e20d325bbULL, 0x01633994ce49a9d0ULL, 0xb1df17cc5c4d5cd0ULL, + 0xff80b6b24beb226dULL, 0x4f3c98ead9efd76dULL, 0xaa21cc5037755b06ULL, 0x1a9de208a571ae06ULL, + 0x9f1a08417d96070aULL, 
0x2fa62619ef92f20aULL, 0xcabb72a301087e61ULL, 0x7a075cfb930c8b61ULL, + 0x3458fd8584aaf5dcULL, 0x84e4d3dd16ae00dcULL, 0x61f98767f8348cb7ULL, 0xd145a93f6a3079b7ULL, + 0xfd46c59bd77871cdULL, 0x4dfaebc3457c84cdULL, 0xa8e7bf79abe608a6ULL, 0x185b912139e2fda6ULL, + 0x5604305f2e44831bULL, 0xe6b81e07bc40761bULL, 0x03a54abd52dafa70ULL, 0xb31964e5c0de0f70ULL, + 0x5ba393f4284aea84ULL, 0xeb1fbdacba4e1f84ULL, 0x0e02e91654d493efULL, 0xbebec74ec6d066efULL, + 0xf0e16630d1761852ULL, 0x405d48684372ed52ULL, 0xa5401cd2ade86139ULL, 0x15fc328a3fec9439ULL, + 0x39ff5e2e82a49c43ULL, 0x8943707610a06943ULL, 0x6c5e24ccfe3ae528ULL, 0xdce20a946c3e1028ULL, + 0x92bdabea7b986e95ULL, 0x220185b2e99c9b95ULL, 0xc71cd108070617feULL, 0x77a0ff509502e2feULL, + 0x22b019788eb84f7dULL, 0x920c37201cbcba7dULL, 0x7711639af2263616ULL, 0xc7ad4dc26022c316ULL, + 0x89f2ecbc7784bdabULL, 0x394ec2e4e58048abULL, 0xdc53965e0b1ac4c0ULL, 0x6cefb806991e31c0ULL, + 0x40ecd4a2245639baULL, 0xf050fafab652ccbaULL, 0x154dae4058c840d1ULL, 0xa5f18018caccb5d1ULL, + 0xebae2166dd6acb6cULL, 0x5b120f3e4f6e3e6cULL, 0xbe0f5b84a1f4b207ULL, 0x0eb375dc33f04707ULL, + 0xe60982cddb64a2f3ULL, 0x56b5ac95496057f3ULL, 0xb3a8f82fa7fadb98ULL, 0x0314d67735fe2e98ULL, + 0x4d4b770922585025ULL, 0xfdf75951b05ca525ULL, 0x18ea0deb5ec6294eULL, 0xa85623b3ccc2dc4eULL, + 0x84554f17718ad434ULL, 0x34e9614fe38e2134ULL, 0xd1f435f50d14ad5fULL, 0x61481bad9f10585fULL, + 0x2f17bad388b626e2ULL, 0x9fab948b1ab2d3e2ULL, 0x7ab6c031f4285f89ULL, 0xca0aee69662caa89ULL, + 0xd0970c61c35d048fULL, 0x602b22395159f18fULL, 0x85367683bfc37de4ULL, 0x358a58db2dc788e4ULL, + 0x7bd5f9a53a61f659ULL, 0xcb69d7fda8650359ULL, 0x2e74834746ff8f32ULL, 0x9ec8ad1fd4fb7a32ULL, + 0xb2cbc1bb69b37248ULL, 0x0277efe3fbb78748ULL, 0xe76abb59152d0b23ULL, 0x57d695018729fe23ULL, + 0x1989347f908f809eULL, 0xa9351a27028b759eULL, 0x4c284e9dec11f9f5ULL, 0xfc9460c57e150cf5ULL, + 0x142e97d49681e901ULL, 0xa492b98c04851c01ULL, 0x418fed36ea1f906aULL, 0xf133c36e781b656aULL, + 0xbf6c62106fbd1bd7ULL, 0x0fd04c48fdb9eed7ULL, 
0xeacd18f2132362bcULL, 0x5a7136aa812797bcULL, + 0x76725a0e3c6f9fc6ULL, 0xc6ce7456ae6b6ac6ULL, 0x23d320ec40f1e6adULL, 0x936f0eb4d2f513adULL, + 0xdd30afcac5536d10ULL, 0x6d8c819257579810ULL, 0x8891d528b9cd147bULL, 0x382dfb702bc9e17bULL, + 0x6d3d1d5830734cf8ULL, 0xdd813300a277b9f8ULL, 0x389c67ba4ced3593ULL, 0x882049e2dee9c093ULL, + 0xc67fe89cc94fbe2eULL, 0x76c3c6c45b4b4b2eULL, 0x93de927eb5d1c745ULL, 0x2362bc2627d53245ULL, + 0x0f61d0829a9d3a3fULL, 0xbfddfeda0899cf3fULL, 0x5ac0aa60e6034354ULL, 0xea7c84387407b654ULL, + 0xa423254663a1c8e9ULL, 0x149f0b1ef1a53de9ULL, 0xf1825fa41f3fb182ULL, 0x413e71fc8d3b4482ULL, + 0xa98486ed65afa176ULL, 0x1938a8b5f7ab5476ULL, 0xfc25fc0f1931d81dULL, 0x4c99d2578b352d1dULL, + 0x02c673299c9353a0ULL, 0xb27a5d710e97a6a0ULL, 0x576709cbe00d2acbULL, 0xe7db27937209dfcbULL, + 0xcbd84b37cf41d7b1ULL, 0x7b64656f5d4522b1ULL, 0x9e7931d5b3dfaedaULL, 0x2ec51f8d21db5bdaULL, + 0x609abef3367d2567ULL, 0xd02690aba479d067ULL, 0x353bc4114ae35c0cULL, 0x8587ea49d8e7a90cULL, + }; + + uint64_t update_crc64_builtin_impl(const uint8_t* data, size_t size, uint64_t crc64) + { + uint64_t u_crc = crc64 ^ ~0ULL; + + uint64_t p_data = 0; + + size_t u_stop = size - (size % 32); + if (u_stop >= 2 * 32) + { + const uint64_t* wdata = reinterpret_cast(data); + + uint64_t u_crc0 = 0; + uint64_t u_crc1 = 0; + uint64_t u_crc2 = 0; + uint64_t u_crc3 = 0; + uint64_t p_last = p_data + u_stop - 32; + size -= u_stop; + u_crc0 = u_crc; + + for (; p_data < p_last; p_data += 32, wdata += 4) + { + uint64_t b0 = wdata[0] ^ u_crc0; + uint64_t b1 = wdata[1] ^ u_crc1; + uint64_t b2 = wdata[2] ^ u_crc2; + uint64_t b3 = wdata[3] ^ u_crc3; + + u_crc0 = m_u32[7 * 256 + (b0 & 0xff)]; + b0 >>= 8; + u_crc1 = m_u32[7 * 256 + (b1 & 0xff)]; + b1 >>= 8; + u_crc2 = m_u32[7 * 256 + (b2 & 0xff)]; + b2 >>= 8; + u_crc3 = m_u32[7 * 256 + (b3 & 0xff)]; + b3 >>= 8; + + u_crc0 ^= m_u32[6 * 256 + (b0 & 0xff)]; + b0 >>= 8; + u_crc1 ^= m_u32[6 * 256 + (b1 & 0xff)]; + b1 >>= 8; + u_crc2 ^= m_u32[6 * 256 + (b2 & 0xff)]; + 
b2 >>= 8; + u_crc3 ^= m_u32[6 * 256 + (b3 & 0xff)]; + b3 >>= 8; + + u_crc0 ^= m_u32[5 * 256 + (b0 & 0xff)]; + b0 >>= 8; + u_crc1 ^= m_u32[5 * 256 + (b1 & 0xff)]; + b1 >>= 8; + u_crc2 ^= m_u32[5 * 256 + (b2 & 0xff)]; + b2 >>= 8; + u_crc3 ^= m_u32[5 * 256 + (b3 & 0xff)]; + b3 >>= 8; + + u_crc0 ^= m_u32[4 * 256 + (b0 & 0xff)]; + b0 >>= 8; + u_crc1 ^= m_u32[4 * 256 + (b1 & 0xff)]; + b1 >>= 8; + u_crc2 ^= m_u32[4 * 256 + (b2 & 0xff)]; + b2 >>= 8; + u_crc3 ^= m_u32[4 * 256 + (b3 & 0xff)]; + b3 >>= 8; + + u_crc0 ^= m_u32[3 * 256 + (b0 & 0xff)]; + b0 >>= 8; + u_crc1 ^= m_u32[3 * 256 + (b1 & 0xff)]; + b1 >>= 8; + u_crc2 ^= m_u32[3 * 256 + (b2 & 0xff)]; + b2 >>= 8; + u_crc3 ^= m_u32[3 * 256 + (b3 & 0xff)]; + b3 >>= 8; + + u_crc0 ^= m_u32[2 * 256 + (b0 & 0xff)]; + b0 >>= 8; + u_crc1 ^= m_u32[2 * 256 + (b1 & 0xff)]; + b1 >>= 8; + u_crc2 ^= m_u32[2 * 256 + (b2 & 0xff)]; + b2 >>= 8; + u_crc3 ^= m_u32[2 * 256 + (b3 & 0xff)]; + b3 >>= 8; + + u_crc0 ^= m_u32[1 * 256 + (b0 & 0xff)]; + b0 >>= 8; + u_crc1 ^= m_u32[1 * 256 + (b1 & 0xff)]; + b1 >>= 8; + u_crc2 ^= m_u32[1 * 256 + (b2 & 0xff)]; + b2 >>= 8; + u_crc3 ^= m_u32[1 * 256 + (b3 & 0xff)]; + b3 >>= 8; + + u_crc0 ^= m_u32[0 * 256 + (b0 & 0xff)]; + u_crc1 ^= m_u32[0 * 256 + (b1 & 0xff)]; + u_crc2 ^= m_u32[0 * 256 + (b2 & 0xff)]; + u_crc3 ^= m_u32[0 * 256 + (b3 & 0xff)]; + } + + u_crc = 0; + u_crc ^= wdata[0] ^ u_crc0; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + + u_crc ^= wdata[1] ^ u_crc1; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff]; + u_crc = 
(u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+
+ u_crc ^= wdata[2] ^ u_crc2;
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+
+ u_crc ^= wdata[3] ^ u_crc3;
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+ u_crc = (u_crc >> 8) ^ m_u1[u_crc & 0xff];
+
+ p_data += 32;
+ }
+
+ for (uint64_t u_bytes = 0; u_bytes < size; ++u_bytes, ++p_data)
+ {
+ u_crc = (u_crc >> 8) ^ m_u1[(u_crc ^ data[p_data]) & 0xff];
+ }
+ return u_crc ^ ~0ULL;
+ }
+
+ static std::function<uint64_t(const uint8_t*, size_t, uint64_t)> crc64_update_func = update_crc64_builtin_impl;
+
+ uint64_t update_crc64(const uint8_t* data, size_t size, uint64_t crc)
+ {
+ return crc64_update_func(data, size, crc);
+ }
+
+ void set_crc64_func(std::function<uint64_t(const uint8_t*, size_t, uint64_t)> func)
+ {
+ crc64_update_func = func ?
std::move(func) : update_crc64_builtin_impl;
+ }
+
+}} // namespace azure::storage
\ No newline at end of file
diff --git a/Microsoft.WindowsAzure.Storage/src/executor.cpp b/Microsoft.WindowsAzure.Storage/src/executor.cpp
new file mode 100644
index 00000000..812a5a16
--- /dev/null
+++ b/Microsoft.WindowsAzure.Storage/src/executor.cpp
@@ -0,0 +1,418 @@
+// -----------------------------------------------------------------------------------------
+// <copyright file="executor.cpp" company="Microsoft">
+// Copyright 2013 Microsoft Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+// </copyright>
+// -----------------------------------------------------------------------------------------
+
+#include "stdafx.h"
+
+#include "wascore/executor.h"
+
+namespace azure { namespace storage { namespace core {
+ pplx::task<void> executor_impl::execute_async(std::shared_ptr<storage_command_base> command, const request_options& options, operation_context context)
+ {
+ if (!context.start_time().is_initialized())
+ {
+ context.set_start_time(utility::datetime::utc_now());
+ }
+
+ // TODO: Use "it" variable name for iterators in for loops
+ // TODO: Reduce usage of auto variable types
+ auto instance = std::make_shared<executor_impl>(command, options, context);
+ return pplx::details::_do_while([instance]() -> pplx::task<bool>
+ {
+ // Start the timer to track timeout.
+ if (instance->m_command->m_use_timeout && !instance->m_command->m_timer_handler->timer_started())
+ {
+ // Timer will be stopped when instance is out of scope, so no need to stop here.
+ instance->m_command->m_timer_handler->start_timer(instance->m_request_options.maximum_execution_time()); + } + // 0. Begin request + instance->assert_canceled(); + instance->validate_location_mode(); + + // 1. Build request + instance->assert_canceled(); + instance->m_start_time = utility::datetime::utc_now(); + instance->m_uri_builder = web::http::uri_builder(instance->m_command->m_request_uri.get_location_uri(instance->m_current_location)); + instance->m_request = instance->m_command->m_build_request(instance->m_uri_builder, instance->m_request_options.server_timeout(), instance->m_context); + instance->m_request_result = request_result(instance->m_start_time, instance->m_current_location); + + if (logger::instance().should_log(instance->m_context, client_log_level::log_level_informational)) + { + utility::string_t str; + str.reserve(256); + str.append(_XPLATSTR("Starting ")).append(instance->m_request.method()).append(_XPLATSTR(" request to ")).append(instance->m_request.request_uri().to_string()); + logger::instance().log(instance->m_context, client_log_level::log_level_informational, str); + } + + // 2. 
Set Headers + instance->assert_canceled(); + auto& client_request_id = instance->m_context.client_request_id(); + if (!client_request_id.empty()) + { + instance->add_request_header(protocol::ms_header_client_request_id, client_request_id); + } + + auto& user_headers = instance->m_context.user_headers(); + for (auto iter = user_headers.begin(); iter != user_headers.end(); ++iter) + { + instance->add_request_header(iter->first, iter->second); + } + + // If the command provided a request body, set it on the http_request object + if (instance->m_command->m_request_body.is_valid()) + { + instance->m_command->m_request_body.rewind(); + instance->m_request.set_body(instance->m_command->m_request_body.stream(), instance->m_command->m_request_body.length(), utility::string_t()); + } + + // If the command wants to copy the response body to a stream, set it + // on the http_request object + if (instance->m_command->m_destination_stream) + { + // Calculate the length and MD5 hash if needed as the incoming data is read + if (!instance->m_is_hashing_started) + { + if (instance->m_command->m_calculate_response_body_checksum == checksum_type::md5) + { + instance->m_hash_provider = hash_provider::create_md5_hash_provider(); + } + else if (instance->m_command->m_calculate_response_body_checksum == checksum_type::crc64) + { + instance->m_hash_provider = hash_provider::create_crc64_hash_provider(); + } + + instance->m_total_downloaded = 0; + instance->m_is_hashing_started = true; + instance->m_should_restart_hash_provider = false; + + // TODO: Consider using hash_provider::is_enabled instead of m_is_hashing_started to signal when the hash provider has been closed + } + + if (instance->m_should_restart_hash_provider) + { + if (instance->m_command->m_calculate_response_body_checksum == checksum_type::md5) + { + instance->m_hash_provider = hash_provider::create_md5_hash_provider(); + } + else if (instance->m_command->m_calculate_response_body_checksum == checksum_type::crc64) + { + 
instance->m_hash_provider = hash_provider::create_crc64_hash_provider(); + } + instance->m_should_restart_hash_provider = false; + } + + instance->m_response_streambuf = hash_wrapper_streambuf(instance->m_command->m_destination_stream.streambuf(), instance->m_hash_provider); + instance->m_request.set_response_stream(instance->m_response_streambuf.create_ostream()); + } + + // Let the user know we are ready to send + auto sending_request = instance->m_context._get_impl()->sending_request(); + if (sending_request) + { + sending_request(instance->m_request, instance->m_context); + } + + // 3. Sign Request + instance->assert_canceled(); + instance->m_command->m_sign_request(instance->m_request, instance->m_context); + + // 4. Set HTTP client configuration + instance->assert_canceled(); + web::http::client::http_client_config config; + + config.set_proxy(instance->m_context.proxy()); + + instance->remaining_time(); + config.set_timeout(instance->m_request_options.noactivity_timeout()); + + size_t http_buffer_size = instance->m_request_options.http_buffer_size(); + if (http_buffer_size > 0) + { + config.set_chunksize(http_buffer_size); + } +#ifndef _WIN32 + if (instance->m_context._get_impl()->get_ssl_context_callback() != nullptr) + { + config.set_ssl_context_callback(instance->m_context._get_impl()->get_ssl_context_callback()); + } +#endif + if (instance->m_context._get_impl()->get_native_session_handle_options_callback() != nullptr) + { + config.set_nativesessionhandle_options(instance->m_context._get_impl()->get_native_session_handle_options_callback()); + } + + config.set_validate_certificates(instance->m_request_options.validate_certificates()); + + // 5-6. 
Potentially upload data and get response
+ instance->assert_canceled();
+#ifdef _WIN32
+ web::http::client::http_client client(instance->m_request.request_uri().authority(), config);
+ return client.request(instance->m_request, instance->m_command->get_cancellation_token()).then([instance](pplx::task<web::http::http_response> get_headers_task)->pplx::task<web::http::http_response>
+#else
+ std::shared_ptr<web::http::client::http_client> client = core::http_client_reusable::get_http_client(instance->m_request.request_uri().authority(), config);
+ return client->request(instance->m_request, instance->m_command->get_cancellation_token()).then([instance](pplx::task<web::http::http_response> get_headers_task)->pplx::task<web::http::http_response>
+#endif // _WIN32
+ {
+ // Headers are ready. It should be noted that http_client will
+ // continue to download the response body in parallel.
+ web::http::http_response response = get_headers_task.get();
+
+ if (logger::instance().should_log(instance->m_context, client_log_level::log_level_informational))
+ {
+ utility::string_t str;
+ str.reserve(128);
+ str.append(_XPLATSTR("Response received. Status code = ")).append(core::convert_to_string(response.status_code())).append(_XPLATSTR(". Reason = ")).append(response.reason_phrase());
+ logger::instance().log(instance->m_context, client_log_level::log_level_informational, str);
+ }
+
+ try
+ {
+ // Let the user know we received the response
+ auto response_received = instance->m_context._get_impl()->response_received();
+ if (response_received)
+ {
+ response_received(instance->m_request, response, instance->m_context);
+ }
+
+ // 7. Do Response parsing (headers etc, no stream available here)
+ // This is when the status code will be checked and m_preprocess_response
+ // will throw a storage_exception if it is not expected.
+ instance->m_request_result = request_result(instance->m_start_time, instance->m_current_location, response, false);
+ instance->m_command->preprocess_response(response, instance->m_request_result, instance->m_context);
+
+ if (logger::instance().should_log(instance->m_context, client_log_level::log_level_informational))
+ {
+ logger::instance().log(instance->m_context, client_log_level::log_level_informational, _XPLATSTR("Successful request ID = ") + instance->m_request_result.service_request_id());
+ }
+
+ // 8. Potentially download data
+ return response.content_ready();
+ }
+ catch (const storage_exception& e)
+ {
+ // If the exception already contains an error message, the issue is not with
+ // the response, so re-throwing is the right thing.
+ if (e.what() != NULL && e.what()[0] != '\0')
+ {
+ instance->assert_canceled();
+ throw;
+ }
+
+ // Otherwise, the response body might contain an error coming from the Storage service.
+
+ return response.content_ready().then([instance](pplx::task<web::http::http_response> get_error_body_task) -> web::http::http_response
+ {
+ auto response = get_error_body_task.get();
+
+ if (!instance->m_command->m_destination_stream)
+ {
+ // However, if the command has a destination stream, there is no guarantee that it
+ // is seek-able and thus it cannot be read back to parse the error.
+ instance->m_request_result = request_result(instance->m_start_time, instance->m_current_location, response, true);
+ }
+ else
+ {
+ // Command has a destination stream. In this case, error information
+ // contained in response body might have been written into the destination
+ // stream. Need to recreate the hash_provider since a retry might be needed.
+ instance->m_is_hashing_started = false;
+ }
+
+ if (logger::instance().should_log(instance->m_context, client_log_level::log_level_warning))
+ {
+ logger::instance().log(instance->m_context, client_log_level::log_level_warning, _XPLATSTR("Failed request ID = ") + instance->m_request_result.service_request_id());
+ }
+
+ instance->assert_canceled();
+ throw storage_exception(utility::conversions::to_utf8string(response.reason_phrase()));
+ });
+ }
+ }).then([instance](pplx::task<web::http::http_response> get_body_task) -> pplx::task<void>
+ {
+ // 9. Evaluate response & parse results
+ web::http::http_response response = get_body_task.get();
+
+ if (instance->m_command->m_destination_stream)
+ {
+ utility::size64_t current_total_downloaded = instance->m_response_streambuf.total_written();
+ utility::size64_t content_length = instance->m_request_result.content_length();
+ if (content_length != std::numeric_limits<utility::size64_t>::max() && current_total_downloaded != content_length)
+ {
+ // The download was interrupted before it could complete
+ instance->assert_canceled();
+ throw storage_exception(protocol::error_incorrect_length);
+ }
+ }
+
+ // It is now time to call m_postprocess_response
+ // Finish the checksum hash if a checksum was being calculated
+ instance->m_hash_provider.close();
+ instance->m_is_hashing_started = false;
+
+ ostream_descriptor descriptor;
+ if (instance->m_response_streambuf)
+ {
+ utility::size64_t total_downloaded = instance->m_total_downloaded + instance->m_response_streambuf.total_written();
+ descriptor = ostream_descriptor(total_downloaded, instance->m_hash_provider.hash());
+ }
+
+ return instance->m_command->postprocess_response(response, instance->m_request_result, descriptor, instance->m_context).then([instance](pplx::task<void> result_task)
+ {
+ try
+ {
+ result_task.get();
+ }
+ catch (const storage_exception& e)
+ {
+ if (e.result().is_response_available())
+ {
+ instance->m_request_result.set_http_status_code(e.result().http_status_code());
+
instance->m_request_result.set_extended_error(e.result().extended_error());
+ }
+
+ throw;
+ }
+
+ });
+ }).then([instance](pplx::task<void> final_task) -> pplx::task<bool>
+ {
+ bool retryable_exception = true;
+ instance->m_context._get_impl()->add_request_result(instance->m_request_result);
+ // Holds an exception pointer to non-storage exceptions (exceptions thrown from cpprestsdk)
+ std::exception_ptr nonstorage_ex_ptr = nullptr;
+
+ try
+ {
+ try
+ {
+ final_task.wait();
+ }
+ catch (const storage_exception& e)
+ {
+ retryable_exception = e.retryable();
+ throw;
+ }
+ catch (...)
+ {
+ nonstorage_ex_ptr = std::current_exception();
+ throw;
+ }
+ }
+ catch (const std::exception& e)
+ {
+ //
+ // Exceptions thrown by the previous steps are handled below
+ //
+
+ if (logger::instance().should_log(instance->m_context, client_log_level::log_level_warning))
+ {
+ logger::instance().log(instance->m_context, client_log_level::log_level_warning, _XPLATSTR("Exception thrown while processing response: ") + utility::conversions::to_string_t(e.what()));
+ }
+
+ if (!retryable_exception)
+ {
+ if (logger::instance().should_log(instance->m_context, client_log_level::log_level_error))
+ {
+ logger::instance().log(instance->m_context, client_log_level::log_level_error, _XPLATSTR("Exception was not retryable: ") + utility::conversions::to_string_t(e.what()));
+ }
+
+ throw storage_exception(e.what(), instance->m_request_result, capture_inner_exception(e), false);
+ }
+
+ // If the operation is canceled.
+ if (instance->m_command->get_cancellation_token().is_canceled())
+ {
+ if (logger::instance().should_log(instance->m_context, client_log_level::log_level_informational))
+ {
+ logger::instance().log(instance->m_context, client_log_level::log_level_informational, _XPLATSTR("Exception thrown while operation canceled: ") + utility::conversions::to_string_t(e.what()));
+ }
+
+ // Handle the cancellation case when the exception is protocol::error_operation_canceled; otherwise rethrow the exception that was already thrown before cancellation.
+ if (std::string(e.what()) == protocol::error_operation_canceled)
+ {
+ instance->assert_canceled();
+ }
+ throw storage_exception(e.what(), instance->m_request_result, capture_inner_exception(e), false);
+ }
+
+ // An exception occurred and thus the request might be retried. Ask the retry policy.
+ retry_context context(instance->m_retry_count++, instance->m_request_result, instance->get_next_location(), instance->m_current_location_mode, nonstorage_ex_ptr);
+ retry_info retry(instance->m_retry_policy.evaluate(context, instance->m_context));
+ if (!retry.should_retry())
+ {
+ if (logger::instance().should_log(instance->m_context, client_log_level::log_level_error))
+ {
+ logger::instance().log(instance->m_context, client_log_level::log_level_error, _XPLATSTR("Retry policy did not allow for a retry, so throwing exception: ") + utility::conversions::to_string_t(e.what()));
+ }
+
+ throw storage_exception(e.what(), instance->m_request_result, capture_inner_exception(e), false);
+ }
+
+ instance->m_current_location = retry.target_location();
+ instance->m_current_location_mode = retry.updated_location_mode();
+
+ // The hash provider may be closed by Casablanca due to a stream error. Close the hash provider and force it to be recreated on retry.
+ instance->m_hash_provider.close(); + instance->m_should_restart_hash_provider = true; + + if (instance->m_response_streambuf) + { + instance->m_total_downloaded += instance->m_response_streambuf.total_written(); + } + + // Try to recover the request. If it cannot be recovered, it cannot be retried + // even if the retry policy allowed for a retry. + if (instance->m_command->m_recover_request && + !instance->m_command->m_recover_request(instance->m_total_downloaded, instance->m_context)) + { + if (logger::instance().should_log(instance->m_context, client_log_level::log_level_error)) + { + logger::instance().log(instance->m_context, client_log_level::log_level_error, _XPLATSTR("Cannot recover request for retry, so throwing exception: ") + utility::conversions::to_string_t(e.what())); + } + + throw storage_exception(e.what(), instance->m_request_result, capture_inner_exception(e), false); + } + + if (logger::instance().should_log(instance->m_context, client_log_level::log_level_informational)) + { + utility::string_t str; + str.reserve(128); + str.append(_XPLATSTR("Retrying failed operation, number of retries: ")).append(core::convert_to_string(instance->m_retry_count)); + logger::instance().log(instance->m_context, client_log_level::log_level_informational, str); + } + + return complete_after(retry.retry_interval()).then([]() -> bool + { + // Returning true here will tell the outer do_while loop to loop one more time. + return true; + }); + } + + // Returning false here will cause do_while to exit. 
+ return pplx::task_from_result(false); + }); + }).then([instance](pplx::task loop_task) + { + instance->m_context.set_end_time(utility::datetime::utc_now()); + loop_task.wait(); + + if (logger::instance().should_log(instance->m_context, client_log_level::log_level_informational)) + { + logger::instance().log(instance->m_context, client_log_level::log_level_informational, _XPLATSTR("Operation completed successfully")); + } + }); + } + +}}} // namespace azure::storage::core diff --git a/Microsoft.WindowsAzure.Storage/src/file_request_factory.cpp b/Microsoft.WindowsAzure.Storage/src/file_request_factory.cpp index 860c53c1..170e74b3 100644 --- a/Microsoft.WindowsAzure.Storage/src/file_request_factory.cpp +++ b/Microsoft.WindowsAzure.Storage/src/file_request_factory.cpp @@ -21,37 +21,180 @@ #include "wascore/resources.h" namespace azure { namespace storage { namespace protocol { - + void add_file_properties(web::http::http_request& request, const cloud_file_properties& properties) { web::http::http_headers& headers = request.headers(); - if (properties.length() > 0) - { - headers.add(_XPLATSTR("x-ms-content-length"), utility::conversions::print_string(properties.length())); - } + if (!core::is_empty_or_whitespace(properties.content_type())) { - headers.add(_XPLATSTR("x-ms-content-type"), properties.content_type()); + headers.add(ms_header_content_type, properties.content_type()); } if (!core::is_empty_or_whitespace(properties.content_encoding())) { - headers.add(_XPLATSTR("x-ms-content-encoding"), properties.content_encoding()); + headers.add(ms_header_content_encoding, properties.content_encoding()); } if (!core::is_empty_or_whitespace(properties.content_language())) { - headers.add(_XPLATSTR("x-ms-content-language"), properties.content_language()); + headers.add(ms_header_content_language, properties.content_language()); } if (!core::is_empty_or_whitespace(properties.cache_control())) { - headers.add(_XPLATSTR("x-ms-cache-control"), properties.cache_control()); + 
headers.add(ms_header_cache_control, properties.cache_control()); } if (!core::is_empty_or_whitespace(properties.content_md5())) { - headers.add(_XPLATSTR("x-ms-content-md5"), properties.content_md5()); + headers.add(ms_header_content_md5, properties.content_md5()); } if (!core::is_empty_or_whitespace(properties.content_disposition())) { - headers.add(_XPLATSTR("x-ms-content-disposition"), properties.content_disposition()); + headers.add(ms_header_content_disposition, properties.content_disposition()); + } + } + + utility::string_t file_properties_to_string(cloud_file_attributes value) + { + if (value == cloud_file_attributes::preserve) + { + return protocol::header_value_file_property_preserve; + } + if (value & cloud_file_attributes::source) + { + return protocol::header_value_file_property_source; + } + if (value & cloud_file_attributes::none) + { + return protocol::header_value_file_attribute_none; + } + + std::vector properties; + if (value & cloud_file_attributes::readonly) + { + properties.emplace_back(header_value_file_attribute_readonly); + } + if (value & cloud_file_attributes::hidden) + { + properties.emplace_back(header_value_file_attribute_hidden); + } + if (value & cloud_file_attributes::system) + { + properties.emplace_back(header_value_file_attribute_system); + } + if (value & cloud_file_attributes::directory) + { + properties.emplace_back(header_value_file_attribute_directory); + } + if (value & cloud_file_attributes::archive) + { + properties.emplace_back(header_value_file_attribute_archive); + } + if (value & cloud_file_attributes::temporary) + { + properties.emplace_back(header_value_file_attribute_temporary); + } + if (value & cloud_file_attributes::offline) + { + properties.emplace_back(header_value_file_attribute_offline); + } + if (value & cloud_file_attributes::not_content_indexed) + { + properties.emplace_back(header_value_file_attribute_notcontentindexed); + } + if (value & cloud_file_attributes::no_scrub_data) + { + 
properties.emplace_back(header_value_file_attribute_noscrubdata);
+ }
+ return core::string_join(properties, header_value_file_attribute_delimiter);
+ }
+
+ enum class file_operation_type
+ {
+ create,
+ update,
+ copy,
+ };
+
+ template<typename Properties>
+ void add_additional_properties(web::http::http_request& request, const Properties& properties, file_operation_type op_type)
+ {
+ web::http::http_headers& headers = request.headers();
+
+ bool permission_set = false;
+ if (!core::is_empty_or_whitespace(properties.permission_key()))
+ {
+ headers.add(ms_header_file_permission_key, properties.permission_key());
+ permission_set = true;
+ }
+ if (!core::is_empty_or_whitespace(properties.permission()))
+ {
+ headers.add(ms_header_file_permission, properties.permission());
+ permission_set = true;
+ }
+ if (!permission_set && op_type == file_operation_type::create)
+ {
+ headers.add(ms_header_file_permission, header_value_file_permission_inherit);
+ }
+ else if (!permission_set && op_type == file_operation_type::update)
+ {
+ headers.add(ms_header_file_permission, header_value_file_property_preserve);
+ }
+ if (op_type == file_operation_type::copy)
+ {
+ if (properties.permission() == header_value_file_property_source)
+ {
+ headers.remove(ms_header_file_permission);
+ headers.remove(ms_header_file_permission_key);
+ headers.add(ms_header_file_permission_copy_mode, header_value_file_property_source);
+ }
+ else if (permission_set)
+ {
+ headers.add(ms_header_file_permission_copy_mode, header_value_file_permission_override);
+ }
+ }
+
+ auto attributes = properties.attributes();
+ if (op_type == file_operation_type::create && attributes == cloud_file_attributes::preserve)
+ {
+ headers.add(ms_header_file_attributes, file_properties_to_string(cloud_file_attributes::none));
+ }
+ else if (op_type == file_operation_type::copy && attributes == cloud_file_attributes::preserve)
+ {
+ }
+ else
+ {
+ headers.add(ms_header_file_attributes, file_properties_to_string(attributes));
+ }
+
+ if
(properties.creation_time().is_initialized()) + { + headers.add(ms_header_file_creation_time, core::convert_to_iso8601_string(properties.creation_time(), 7)); + } + else if (op_type == file_operation_type::create) + { + headers.add(ms_header_file_creation_time, header_value_file_time_now); + } + else if (op_type == file_operation_type::update) + { + headers.add(ms_header_file_creation_time, header_value_file_property_preserve); + } + else if (op_type == file_operation_type::copy) + { + } + + if (properties.last_write_time().is_initialized()) + { + headers.add(ms_header_file_last_write_time, core::convert_to_iso8601_string(properties.last_write_time(), 7)); + } + else if (op_type == file_operation_type::create) + { + headers.add(ms_header_file_last_write_time, header_value_file_time_now); + } + else if (op_type == file_operation_type::update) + { + headers.add(ms_header_file_last_write_time, header_value_file_property_preserve); + } + else if (op_type == file_operation_type::copy) + { } } @@ -75,6 +218,11 @@ namespace azure { namespace storage { namespace protocol { } } + void add_access_condition(web::http::http_request& request, const file_access_condition& condition) + { + add_optional_header(request.headers(), ms_header_lease_id, condition.lease_id()); + } + web::http::http_request list_shares(const utility::string_t& prefix, bool get_metadata, int max_results, const continuation_token& token, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_component, component_list, /* do_encoding */ false)); @@ -108,7 +256,7 @@ namespace azure { namespace storage { namespace protocol { uri_builder.append_query(core::make_query_parameter(uri_query_resource_type, resource_share, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); web::http::http_headers& headers = request.headers(); - if 
(max_size <= maximum_share_quota) + if (max_size != std::numeric_limits::max()) { headers.add(protocol::ms_header_share_quota, max_size); } @@ -134,10 +282,7 @@ namespace azure { namespace storage { namespace protocol { uri_builder.append_query(core::make_query_parameter(uri_query_resource_type, resource_share, /* do_encoding */ false)); uri_builder.append_query(core::make_query_parameter(uri_query_component, component_properties, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); - if (properties.quota() <= protocol::maximum_share_quota) - { - request.headers().add(protocol::ms_header_share_quota, properties.quota()); - } + request.headers().add(protocol::ms_header_share_quota, properties.quota()); return request; } @@ -174,11 +319,30 @@ namespace azure { namespace storage { namespace protocol { return request; } - web::http::http_request create_file_directory(const cloud_metadata& metadata, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request get_file_share_permission(const utility::string_t& permission_key, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + { + uri_builder.append_query(core::make_query_parameter(uri_query_resource_type, resource_share, /* do encoding */ false)); + uri_builder.append_query(core::make_query_parameter(uri_query_component, component_file_permission, /* do encoding */ false)); + web::http::http_request request(base_request(web::http::methods::GET, uri_builder, timeout, context)); + request.headers().add(protocol::ms_header_file_permission_key, permission_key); + return request; + } + + web::http::http_request set_file_share_permission(web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + { + uri_builder.append_query(core::make_query_parameter(uri_query_resource_type, resource_share, /* 
do encoding */ false)); + uri_builder.append_query(core::make_query_parameter(uri_query_component, component_file_permission, /* do encoding */ false)); + web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); + request.headers().add(web::http::header_names::content_type, header_value_content_type_json); + return request; + } + + web::http::http_request create_file_directory(const cloud_metadata& metadata, const cloud_file_directory_properties& properties, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_resource_type, resource_directory, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); add_metadata(request, metadata); + add_additional_properties(request, properties, file_operation_type::create); return request; } @@ -195,7 +359,18 @@ namespace azure { namespace storage { namespace protocol { web::http::http_request request(base_request(web::http::methods::HEAD, uri_builder, timeout, context)); return request; } - + + web::http::http_request set_file_directory_properties(const cloud_file_directory_properties& properties, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + { + uri_builder.append_query(core::make_query_parameter(uri_query_resource_type, resource_directory, /* do_encoding */ false)); + uri_builder.append_query(core::make_query_parameter(uri_query_component, component_properties, /* do_encoding */ false)); + + web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); + add_additional_properties(request, properties, file_operation_type::update); + + return request; + } + web::http::http_request set_file_directory_metadata(const cloud_metadata& metadata, web::http::uri_builder uri_builder, const std::chrono::seconds& 
timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_resource_type, resource_directory, /* do_encoding */ false)); @@ -205,11 +380,16 @@ namespace azure { namespace storage { namespace protocol { return request; } - web::http::http_request list_files_and_directories(int64_t max_results, const continuation_token& token, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request list_files_and_directories(const utility::string_t& prefix, int64_t max_results, const continuation_token& token, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_resource_type, resource_directory, /* do_encoding */ false)); uri_builder.append_query(core::make_query_parameter(uri_query_component, component_list, /* do_encoding */ false)); - + + if (!prefix.empty()) + { + uri_builder.append_query(core::make_query_parameter(uri_query_prefix, prefix)); + } + if (!token.empty()) { uri_builder.append_query(core::make_query_parameter(uri_query_marker, token.next_marker())); @@ -224,94 +404,110 @@ namespace azure { namespace storage { namespace protocol { return request; } - web::http::http_request create_file(const int64_t length, const cloud_metadata& metadata, const cloud_file_properties& properties, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request create_file(const int64_t length, const cloud_metadata& metadata, const cloud_file_properties& properties, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); add_metadata(request, metadata); add_file_properties(request, properties); + 
add_additional_properties(request, properties, file_operation_type::create); add_optional_header(request.headers(), _XPLATSTR("x-ms-type"), _XPLATSTR("file")); - request.headers()[_XPLATSTR("x-ms-content-length")] = utility::conversions::print_string(length); + request.headers()[ms_header_content_length] = core::convert_to_string(length); + add_access_condition(request, condition); return request; } - web::http::http_request delete_file(web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request delete_file(const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { web::http::http_request request(base_request(web::http::methods::DEL, uri_builder, timeout, context)); + add_access_condition(request, condition); return request; } - web::http::http_request get_file_properties(web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request get_file_properties(const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { web::http::http_request request(base_request(web::http::methods::HEAD, uri_builder, timeout, context)); + add_access_condition(request, condition); return request; } - - web::http::http_request set_file_properties(const cloud_file_properties& properties, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + + web::http::http_request set_file_properties(const cloud_file_properties& properties, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_component, component_properties, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::PUT, 
uri_builder, timeout, context)); + // Note that setting file properties with a length won't resize the file. + // If resize is needed, call azure::storage::cloud_file::resize instead. add_file_properties(request, properties); + add_additional_properties(request, properties, file_operation_type::update); + add_access_condition(request, condition); + + return request; + } + web::http::http_request resize_with_properties(const cloud_file_properties& properties, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + { + auto request = set_file_properties(properties, condition, uri_builder, timeout, context); + request.headers()[ms_header_content_length] = core::convert_to_string(properties.length()); return request; } - - web::http::http_request set_file_metadata(const cloud_metadata& metadata, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + + web::http::http_request set_file_metadata(const cloud_metadata& metadata, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_component, component_metadata, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); add_metadata(request, metadata); + add_access_condition(request, condition); return request; } - - web::http::http_request copy_file(const web::http::uri& source, const cloud_metadata& metadata, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request copy_file(const web::http::uri& source, const cloud_metadata& metadata, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) {
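
The `resize_with_properties` helper in the hunk above deliberately reuses `set_file_properties` and then overwrites the `x-ms-content-length` header, so the two requests differ only in the header that triggers the resize. A minimal sketch of this reuse-then-override pattern, using a hypothetical miniature request type in place of `web::http::http_request` (all names here are stand-ins, not the library's API):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Hypothetical miniature request type; the real code uses web::http::http_request.
struct fake_request
{
    std::map<std::string, std::string> headers;
};

// Base builder: sets properties but omits x-ms-content-length, since setting
// properties alone must not resize the file.
fake_request build_set_properties(const std::string& content_type)
{
    fake_request request;
    request.headers["x-ms-content-type"] = content_type;
    return request;
}

// Resize variant: reuse the base builder, then layer the length header on top.
fake_request build_resize_with_properties(const std::string& content_type, std::uint64_t length)
{
    fake_request request = build_set_properties(content_type);
    request.headers["x-ms-content-length"] = std::to_string(length);
    return request;
}
```

Composing the resize request out of the base builder keeps the shared header logic in one place, which is the design choice the hunk above makes.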
web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); request.headers().add(ms_header_copy_source, source.to_string()); add_metadata(request, metadata); - + add_access_condition(request, condition); return request; } - web::http::http_request copy_file_from_blob(const web::http::uri& source, const access_condition& condition, const cloud_metadata& metadata, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request copy_file_from_blob(const web::http::uri& source, const access_condition& condition, const cloud_metadata& metadata, const file_access_condition& file_condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); request.headers().add(ms_header_copy_source, source.to_string()); add_source_access_condition(request, condition); add_metadata(request, metadata); - + add_access_condition(request, file_condition); return request; } - web::http::http_request abort_copy_file(const utility::string_t& copy_id, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request abort_copy_file(const utility::string_t& copy_id, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_component, component_copy, /* do_encoding */ false)); uri_builder.append_query(core::make_query_parameter(uri_query_copy_id, copy_id, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); request.headers().add(ms_header_copy_action, header_value_copy_abort); + add_access_condition(request, condition); return request; } - web::http::http_request 
list_file_ranges(utility::size64_t start_offset, utility::size64_t length, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request list_file_ranges(utility::size64_t start_offset, utility::size64_t length, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_component, component_range_list, /* do_encoding */ false)); - + web::http::http_request request(base_request(web::http::methods::GET, uri_builder, timeout, context)); add_file_range(request, start_offset, length); - + add_access_condition(request, condition); return request; } - web::http::http_request put_file_range(file_range range, file_range_write write, utility::string_t content_md5, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request put_file_range(file_range range, file_range_write write, utility::string_t content_md5, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { uri_builder.append_query(core::make_query_parameter(uri_query_component, component_range, /* do_encoding */ false)); web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); - + web::http::http_headers& headers = request.headers(); headers.add(ms_header_range, range.to_string()); @@ -326,10 +522,11 @@ namespace azure { namespace storage { namespace protocol { headers.add(_XPLATSTR("x-ms-write"), _XPLATSTR("clear")); break; } + add_access_condition(request, condition); return request; } - web::http::http_request get_file(utility::size64_t start_offset, utility::size64_t length, bool md5_validation, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) + 
web::http::http_request get_file(utility::size64_t start_offset, utility::size64_t length, bool md5_validation, const file_access_condition& condition, web::http::uri_builder uri_builder, const std::chrono::seconds& timeout, operation_context context) { web::http::http_request request(base_request(web::http::methods::GET, uri_builder, timeout, context)); web::http::http_headers& headers = request.headers(); @@ -337,8 +534,31 @@ namespace azure { namespace storage { namespace protocol { if (start_offset < std::numeric_limits<utility::size64_t>::max() && md5_validation) { - headers.add(_XPLATSTR("x-ms-range-get-content-md5"), _XPLATSTR("true")); + headers.add(ms_header_range_get_content_md5, header_value_true); + } + add_access_condition(request, condition); + return request; + } + + web::http::http_request lease_file(const utility::string_t& lease_action, const utility::string_t& proposed_lease_id, const file_access_condition& condition, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + { + uri_builder.append_query(core::make_query_parameter(uri_query_component, component_lease, /* do_encoding */ false)); + web::http::http_request request(base_request(web::http::methods::PUT, uri_builder, timeout, context)); + + web::http::http_headers& headers = request.headers(); + headers.add(ms_header_lease_action, lease_action); + if (lease_action == header_value_lease_acquire) + { + headers.add(ms_header_lease_duration, "-1"); + add_optional_header(headers, ms_header_lease_proposed_id, proposed_lease_id); } + else if (lease_action == header_value_lease_change) + { + add_optional_header(headers, ms_header_lease_proposed_id, proposed_lease_id); + } + + add_access_condition(request, condition); + return request; } -}}} \ No newline at end of file +}}} diff --git a/Microsoft.WindowsAzure.Storage/src/file_response_parsers.cpp b/Microsoft.WindowsAzure.Storage/src/file_response_parsers.cpp index e0ec5d64..64d63495 100644 ---
a/Microsoft.WindowsAzure.Storage/src/file_response_parsers.cpp +++ b/Microsoft.WindowsAzure.Storage/src/file_response_parsers.cpp @@ -17,6 +17,9 @@ #include "stdafx.h" #include "wascore/protocol.h" +#include "wascore/constants.h" + +#include "cpprest/asyncrt_utils.h" namespace azure { namespace storage { namespace protocol { @@ -26,6 +29,10 @@ namespace azure { namespace storage { namespace protocol { properties.m_quota = parse_quota(response); properties.m_etag = parse_etag(response); properties.m_last_modified = parse_last_modified(response); + properties.m_next_allowed_quota_downgrade_time = parse_datetime_rfc1123(get_header_value(response.headers(), ms_header_share_next_allowed_quota_downgrade_time)); + response.headers().match(ms_header_share_provisioned_egress_mbps, properties.m_provisioned_egress); + response.headers().match(ms_header_share_provisioned_ingress_mbps, properties.m_provisioned_ingress); + response.headers().match(ms_header_share_provisioned_iops, properties.m_provisioned_iops); return properties; } @@ -34,6 +41,15 @@ namespace azure { namespace storage { namespace protocol { cloud_file_directory_properties properties; properties.m_etag = parse_etag(response); properties.m_last_modified = parse_last_modified(response); + const auto& headers = response.headers(); + properties.m_server_encrypted = response_parsers::parse_boolean(get_header_value(headers, ms_header_server_encrypted)); + properties.set_permission_key(get_header_value(headers, ms_header_file_permission_key)); + properties.m_attributes = parse_file_attributes(get_header_value(headers, ms_header_file_attributes)); + properties.set_creation_time(parse_datetime_iso8601(get_header_value(headers, ms_header_file_creation_time))); + properties.set_last_write_time(parse_datetime_iso8601(get_header_value(headers, ms_header_file_last_write_time))); + properties.m_change_time = parse_datetime_iso8601(get_header_value(headers, ms_header_file_change_time)); + properties.m_file_id = 
get_header_value(headers, ms_header_file_id); + properties.m_parent_id = get_header_value(headers, ms_header_file_parent_id); return properties; } @@ -46,7 +62,7 @@ namespace azure { namespace storage { namespace protocol { { auto slash = value.find(_XPLATSTR('/')); value = value.substr(slash + 1); - return utility::conversions::scan_string(value); + return utility::conversions::details::scan_string(value); } return headers.content_length(); @@ -59,14 +75,31 @@ namespace azure { namespace storage { namespace protocol { properties.m_last_modified = parse_last_modified(response); properties.m_length = parse_file_size(response); - auto& headers = response.headers(); + const auto& headers = response.headers(); properties.m_cache_control = get_header_value(headers, web::http::header_names::cache_control); properties.m_content_disposition = get_header_value(headers, header_content_disposition); properties.m_content_encoding = get_header_value(headers, web::http::header_names::content_encoding); properties.m_content_language = get_header_value(headers, web::http::header_names::content_language); - properties.m_content_md5 = get_header_value(headers, web::http::header_names::content_md5); properties.m_content_type = get_header_value(headers, web::http::header_names::content_type); properties.m_type = get_header_value(headers, _XPLATSTR("x-ms-file-type")); + properties.m_server_encrypted = response_parsers::parse_boolean(get_header_value(headers, ms_header_server_encrypted)); + // When content_range is not empty, it means the request is Get File with range specified, then 'Content-MD5' header should not be used. 
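
The comment above captures a subtle header rule: on a ranged Get File, `Content-MD5` describes only the returned range, so the parser prefers `x-ms-content-md5` and falls back to `Content-MD5` only when `Content-Range` is absent. A self-contained sketch of that fallback, with a plain `std::map` standing in for the cpprest header collection:

```cpp
#include <cassert>
#include <map>
#include <string>

// Plain std::map standing in for web::http::http_headers.
using header_map = std::map<std::string, std::string>;

std::string get_header(const header_map& headers, const std::string& name)
{
    auto it = headers.find(name);
    return it == headers.end() ? std::string() : it->second;
}

// Prefer x-ms-content-md5; fall back to Content-MD5 only when Content-Range is
// absent, because on a ranged download Content-MD5 covers just the range.
std::string resolve_content_md5(const header_map& headers)
{
    std::string md5 = get_header(headers, "x-ms-content-md5");
    if (md5.empty() && get_header(headers, "Content-Range").empty())
    {
        md5 = get_header(headers, "Content-MD5");
    }
    return md5;
}
```

Without the `Content-Range` guard, a ranged download would store the range's MD5 as the whole file's MD5 and later validation would fail spuriously.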
+ properties.m_content_md5 = get_header_value(headers, ms_header_content_md5); + if (properties.m_content_md5.empty() && get_header_value(headers, web::http::header_names::content_range).empty()) + { + properties.m_content_md5 = get_header_value(headers, web::http::header_names::content_md5); + } + properties.set_permission_key(get_header_value(headers, ms_header_file_permission_key)); + properties.m_attributes = parse_file_attributes(get_header_value(headers, ms_header_file_attributes)); + properties.set_creation_time(parse_datetime_iso8601(get_header_value(headers, ms_header_file_creation_time))); + properties.set_last_write_time(parse_datetime_iso8601(get_header_value(headers, ms_header_file_last_write_time))); + properties.m_change_time = parse_datetime_iso8601(get_header_value(headers, ms_header_file_change_time)); + properties.m_file_id = get_header_value(headers, ms_header_file_id); + properties.m_parent_id = get_header_value(headers, ms_header_file_parent_id); + + properties.m_lease_status = parse_lease_status(response); + properties.m_lease_state = parse_lease_state(response); + properties.m_lease_duration = parse_lease_duration(response); return properties; } diff --git a/Microsoft.WindowsAzure.Storage/src/hashing.cpp b/Microsoft.WindowsAzure.Storage/src/hashing.cpp index cd187252..ee9abed1 100644 --- a/Microsoft.WindowsAzure.Storage/src/hashing.cpp +++ b/Microsoft.WindowsAzure.Storage/src/hashing.cpp @@ -21,37 +21,18 @@ namespace azure { namespace storage { namespace core { #ifdef _WIN32 - - cryptography_hash_algorithm::cryptography_hash_algorithm(LPCWSTR algorithm_id, ULONG flags) - { - NTSTATUS status = BCryptOpenAlgorithmProvider(&m_algorithm_handle, algorithm_id, NULL, flags); - if (status != 0) - { - throw utility::details::create_system_error(status); - } - } - - cryptography_hash_algorithm::~cryptography_hash_algorithm() - { - BCryptCloseAlgorithmProvider(m_algorithm_handle, 0); - } - - hmac_sha256_hash_algorithm 
hmac_sha256_hash_algorithm::m_instance; - - md5_hash_algorithm md5_hash_algorithm::m_instance; - - cryptography_hash_provider_impl::cryptography_hash_provider_impl(const cryptography_hash_algorithm& algorithm, const std::vector<uint8_t>& key) + cryptography_hash_provider_impl::cryptography_hash_provider_impl(BCRYPT_HANDLE algorithm_handle, const std::vector<uint8_t>& key) { DWORD hash_object_size = 0; DWORD data_length = 0; - NTSTATUS status = BCryptGetProperty(algorithm, BCRYPT_OBJECT_LENGTH, (PBYTE)&hash_object_size, sizeof(DWORD), &data_length, 0); + NTSTATUS status = BCryptGetProperty(algorithm_handle, BCRYPT_OBJECT_LENGTH, (PBYTE)&hash_object_size, sizeof(DWORD), &data_length, 0); if (status != 0) { throw utility::details::create_system_error(status); } m_hash_object.resize(hash_object_size); - status = BCryptCreateHash(algorithm, &m_hash_handle, (PUCHAR)m_hash_object.data(), (ULONG)m_hash_object.size(), (PUCHAR)key.data(), (ULONG)key.size(), 0); + status = BCryptCreateHash(algorithm_handle, &m_hash_handle, (PUCHAR)m_hash_object.data(), (ULONG)m_hash_object.size(), (PUCHAR)key.data(), (ULONG)key.size(), 0); if (status != 0) { throw utility::details::create_system_error(status); @@ -93,51 +74,198 @@ namespace azure { namespace storage { namespace core { } } - hmac_sha256_hash_provider_impl::hmac_sha256_hash_provider_impl(const std::vector<uint8_t>& key) - : cryptography_hash_provider_impl(hmac_sha256_hash_algorithm::instance(), key) + BCRYPT_ALG_HANDLE hmac_sha256_hash_provider_impl::algorithm_handle() { + static const BCRYPT_ALG_HANDLE alg_handle = []() { + BCRYPT_ALG_HANDLE handle; + NTSTATUS status = BCryptOpenAlgorithmProvider(&handle, BCRYPT_SHA256_ALGORITHM, NULL, BCRYPT_ALG_HANDLE_HMAC_FLAG); + if (status != 0) + { + throw utility::details::create_system_error(status); + } + return handle; + }(); + + return alg_handle; } - md5_hash_provider_impl::md5_hash_provider_impl() - : cryptography_hash_provider_impl(md5_hash_algorithm::instance(), std::vector<uint8_t>()) +
hmac_sha256_hash_provider_impl::hmac_sha256_hash_provider_impl(const std::vector<uint8_t>& key) : cryptography_hash_provider_impl(algorithm_handle(), key) + { + } + + hmac_sha256_hash_provider_impl::~hmac_sha256_hash_provider_impl() + { + } + + void hmac_sha256_hash_provider_impl::write(const uint8_t* data, size_t count) + { + cryptography_hash_provider_impl::write(data, count); + } + + void hmac_sha256_hash_provider_impl::close() + { + cryptography_hash_provider_impl::close(); + } + + BCRYPT_ALG_HANDLE md5_hash_provider_impl::algorithm_handle() + { + static const BCRYPT_ALG_HANDLE alg_handle = []() { + BCRYPT_ALG_HANDLE handle; + NTSTATUS status = BCryptOpenAlgorithmProvider(&handle, BCRYPT_MD5_ALGORITHM, NULL, 0); + if (status != 0) + { + throw utility::details::create_system_error(status); + } + return handle; + }(); + + return alg_handle; + } + + md5_hash_provider_impl::md5_hash_provider_impl() : cryptography_hash_provider_impl(algorithm_handle(), std::vector<uint8_t>()) + { + } + + md5_hash_provider_impl::~md5_hash_provider_impl() + { + } + + void md5_hash_provider_impl::write(const uint8_t* data, size_t count) + { + cryptography_hash_provider_impl::write(data, count); + } + + void md5_hash_provider_impl::close() { + cryptography_hash_provider_impl::close(); + } + + BCRYPT_ALG_HANDLE sha256_hash_provider_impl::algorithm_handle() + { + static const BCRYPT_ALG_HANDLE alg_handle = []() { + BCRYPT_ALG_HANDLE handle; + NTSTATUS status = BCryptOpenAlgorithmProvider(&handle, BCRYPT_SHA256_ALGORITHM, NULL, 0); + if (status != 0) + { + throw utility::details::create_system_error(status); + } + return handle; + }(); + + return alg_handle; + } + + sha256_hash_provider_impl::sha256_hash_provider_impl() : cryptography_hash_provider_impl(algorithm_handle(), std::vector<uint8_t>()) + { + } + + sha256_hash_provider_impl::~sha256_hash_provider_impl() + { + } + + void sha256_hash_provider_impl::write(const uint8_t* data, size_t count) + { + cryptography_hash_provider_impl::write(data, count); + } + +
void sha256_hash_provider_impl::close + { + cryptography_hash_provider_impl::close(); } #else // Linux hmac_sha256_hash_provider_impl::hmac_sha256_hash_provider_impl(const std::vector<uint8_t>& key) { - HMAC_CTX_init(&m_hash_context); - HMAC_Init_ex(&m_hash_context, &key[0], (int) key.size(), EVP_sha256(), NULL); + #if OPENSSL_VERSION_NUMBER < 0x10100000L || defined (LIBRESSL_VERSION_NUMBER) + m_hash_context = (HMAC_CTX*) OPENSSL_malloc(sizeof(*m_hash_context)); + memset(m_hash_context, 0, sizeof(*m_hash_context)); + HMAC_CTX_init(m_hash_context); + #else + m_hash_context = HMAC_CTX_new(); + HMAC_CTX_reset(m_hash_context); + #endif + HMAC_Init_ex(m_hash_context, &key[0], (int) key.size(), EVP_sha256(), NULL); } - void hmac_sha256_hash_provider_impl::write(const uint8_t* data, size_t count) + hmac_sha256_hash_provider_impl::~hmac_sha256_hash_provider_impl() { - HMAC_Update(&m_hash_context, data, count); + if (m_hash_context != nullptr) + { +#if OPENSSL_VERSION_NUMBER < 0x10100000L || defined (LIBRESSL_VERSION_NUMBER) + OPENSSL_free(m_hash_context); +#else + HMAC_CTX_free(m_hash_context); +#endif + } + } + + void hmac_sha256_hash_provider_impl::write(const uint8_t* data, size_t count) + { + HMAC_Update(m_hash_context, data, count); } void hmac_sha256_hash_provider_impl::close() { unsigned int length = SHA256_DIGEST_LENGTH; m_hash.resize(length); - HMAC_Final(&m_hash_context, &m_hash[0], &length); - HMAC_CTX_cleanup(&m_hash_context); + HMAC_Final(m_hash_context, &m_hash[0], &length); +#if OPENSSL_VERSION_NUMBER < 0x10100000L || defined (LIBRESSL_VERSION_NUMBER) + HMAC_CTX_cleanup(m_hash_context); +#endif + } md5_hash_provider_impl::md5_hash_provider_impl() { - MD5_Init(&m_hash_context); + m_hash_context = (MD5_CTX*) OPENSSL_malloc(sizeof(MD5_CTX)); + memset(m_hash_context, 0, sizeof(*m_hash_context)); + MD5_Init(m_hash_context); + } + + md5_hash_provider_impl::~md5_hash_provider_impl() + { + if (m_hash_context != nullptr) + { + OPENSSL_free(m_hash_context); + } } void
md5_hash_provider_impl::write(const uint8_t* data, size_t count) { - MD5_Update(&m_hash_context, data, count); + MD5_Update(m_hash_context, data, count); } void md5_hash_provider_impl::close() { m_hash.resize(MD5_DIGEST_LENGTH); - MD5_Final(m_hash.data(), &m_hash_context); + MD5_Final(m_hash.data(), m_hash_context); + } + + sha256_hash_provider_impl::sha256_hash_provider_impl() + { + m_hash_context = (SHA256_CTX*)OPENSSL_malloc(sizeof(SHA256_CTX)); + memset(m_hash_context, 0, sizeof(*m_hash_context)); + SHA256_Init(m_hash_context); + } + + sha256_hash_provider_impl::~sha256_hash_provider_impl() + { + if (m_hash_context != nullptr) + { + OPENSSL_free(m_hash_context); + } + } + + void sha256_hash_provider_impl::write(const uint8_t* data, size_t count) + { + SHA256_Update(m_hash_context, data, count); + } + + void sha256_hash_provider_impl::close() + { + m_hash.resize(SHA256_DIGEST_LENGTH); + SHA256_Final(m_hash.data(), m_hash_context); } #endif diff --git a/Microsoft.WindowsAzure.Storage/src/navigation.cpp b/Microsoft.WindowsAzure.Storage/src/navigation.cpp index 1b7a3c35..b7561ba0 100644 --- a/Microsoft.WindowsAzure.Storage/src/navigation.cpp +++ b/Microsoft.WindowsAzure.Storage/src/navigation.cpp @@ -266,7 +266,7 @@ namespace azure { namespace storage { namespace core { storage_credentials parsed_credentials = protocol::parse_query(uri, require_signed_resource); if (parsed_credentials.is_sas()) { - if (credentials.is_shared_key() || credentials.is_sas()) + if (credentials.is_shared_key() || credentials.is_sas() || credentials.is_bearer_token()) { throw std::invalid_argument(protocol::error_multiple_credentials); } @@ -323,7 +323,7 @@ namespace azure { namespace storage { namespace core { } web::http::uri_builder builder(uri); - builder.append_path(path, true); + builder.append_path_raw(path, true); return builder.to_uri(); } diff --git a/Microsoft.WindowsAzure.Storage/src/protocol_json.cpp b/Microsoft.WindowsAzure.Storage/src/protocol_json.cpp index 
72f716d9..678cf882 100644 --- a/Microsoft.WindowsAzure.Storage/src/protocol_json.cpp +++ b/Microsoft.WindowsAzure.Storage/src/protocol_json.cpp @@ -244,4 +244,26 @@ namespace azure { namespace storage { namespace protocol { return storage_extended_error(std::move(error_code), std::move(error_message), std::move(details)); } + utility::string_t parse_file_permission(const web::json::value& document) + { + if (document.is_object()) + { + const web::json::object& result_obj = document.as_object(); + + web::json::object::const_iterator iter = result_obj.find(json_file_permission); + if (iter != result_obj.end()) + { + return iter->second.as_string(); + } + } + return utility::string_t(); + } + + utility::string_t construct_file_permission(const utility::string_t& value) + { + web::json::value obj; + obj[json_file_permission] = web::json::value::string(value); + return obj.serialize(); + } + }}} // namespace azure::storage::protocol diff --git a/Microsoft.WindowsAzure.Storage/src/protocol_xml.cpp b/Microsoft.WindowsAzure.Storage/src/protocol_xml.cpp index 105a393f..baddef03 100644 --- a/Microsoft.WindowsAzure.Storage/src/protocol_xml.cpp +++ b/Microsoft.WindowsAzure.Storage/src/protocol_xml.cpp @@ -18,6 +18,7 @@ #include "stdafx.h" #include "wascore/protocol.h" #include "wascore/protocol_xml.h" +#include "wascore/util.h" namespace azure { namespace storage { namespace protocol { @@ -74,7 +75,7 @@ namespace azure { namespace storage { namespace protocol { { if (element_name == xml_last_modified) { - m_properties.m_last_modified = parse_last_modified(get_current_element_text()); + m_properties.m_last_modified = parse_datetime_rfc1123(get_current_element_text()); return; } @@ -101,6 +102,11 @@ namespace azure { namespace storage { namespace protocol { m_properties.m_lease_duration = parse_lease_duration(get_current_element_text()); return; } + + if (element_name == xml_public_access) + { + m_properties.m_public_access = parse_public_access_type(get_current_element_text()); 
+ } } if (element_name == xml_name) @@ -167,7 +173,7 @@ namespace azure { namespace storage { namespace protocol { { if (element_name == xml_last_modified) { - m_properties.m_last_modified = parse_last_modified(get_current_element_text()); + m_properties.m_last_modified = parse_datetime_rfc1123(get_current_element_text()); return; } @@ -279,7 +285,7 @@ namespace azure { namespace storage { namespace protocol { if (element_name == xml_copy_completion_time) { - m_copy_state.m_completion_time = response_parsers::parse_copy_completion_time(get_current_element_text()); + m_copy_state.m_completion_time = response_parsers::parse_datetime(get_current_element_text()); return; } @@ -288,6 +294,34 @@ namespace azure { namespace storage { namespace protocol { m_copy_state.m_status_description = get_current_element_text(); return; } + + if (element_name == xml_incremental_copy) + { + m_properties.m_is_incremental_copy = response_parsers::parse_boolean(get_current_element_text()); + return; + } + + if (element_name == xml_copy_destination_snapshot) + { + m_copy_state.m_destination_snapshot_time = response_parsers::parse_datetime(get_current_element_text(), utility::datetime::date_format::ISO_8601); + } + + if (element_name == xml_access_tier) + { + auto current_text = get_current_element_text(); + m_properties.m_standard_blob_tier = response_parsers::parse_standard_blob_tier(current_text); + m_properties.m_premium_blob_tier = response_parsers::parse_premium_blob_tier(current_text); + } + + if (element_name == xml_access_tier_inferred) + { + m_properties.m_access_tier_inferred = response_parsers::parse_boolean(get_current_element_text()); + } + + if (element_name == xml_access_tier_change_time) + { + m_properties.m_access_tier_change_time = response_parsers::parse_datetime(get_current_element_text()); + } } if (element_name == xml_snapshot) @@ -296,6 +330,18 @@ namespace azure { namespace storage { namespace protocol { return; } + if (element_name == xml_version_id) + { + 
m_properties.m_version_id = get_current_element_text(); + return; + } + + if (element_name == xml_is_current_version) + { + m_is_current_version = response_parsers::parse_boolean(get_current_element_text()); + return; + } + if (element_name == xml_name) { m_name = get_current_element_text(); @@ -316,10 +362,11 @@ namespace azure { namespace storage { namespace protocol { { if (element_name == xml_blob) { - m_blob_items.push_back(cloud_blob_list_item(std::move(m_uri), std::move(m_name), std::move(m_snapshot_time), std::move(m_metadata), std::move(m_properties), std::move(m_copy_state))); + m_blob_items.push_back(cloud_blob_list_item(std::move(m_uri), std::move(m_name), std::move(m_snapshot_time), m_is_current_version, std::move(m_metadata), std::move(m_properties), std::move(m_copy_state))); m_uri = web::uri(); m_name = utility::string_t(); m_snapshot_time = utility::string_t(); + m_is_current_version = false; m_metadata = azure::storage::cloud_metadata(); m_properties = azure::storage::cloud_blob_properties(); m_copy_state = azure::storage::copy_state(); @@ -730,7 +777,7 @@ namespace azure { namespace storage { namespace protocol { { if (element_name == xml_last_modified) { - m_properties.m_last_modified = parse_last_modified(get_current_element_text()); + m_properties.m_last_modified = parse_datetime_rfc1123(get_current_element_text()); return; } @@ -745,6 +792,30 @@ namespace azure { namespace storage { namespace protocol { extract_current_element(m_properties.m_quota); return; } + + if (element_name == xml_provisioned_iops) + { + extract_current_element(m_properties.m_provisioned_iops); + return; + } + + if (element_name == xml_provisioned_ingress_mbps) + { + extract_current_element(m_properties.m_provisioned_ingress); + return; + } + + if (element_name == xml_provisioned_egress_mpbs) + { + extract_current_element(m_properties.m_provisioned_egress); + return; + } + + if (element_name == xml_next_allowed_quota_downgrade_time) + { + 
m_properties.m_next_allowed_quota_downgrade_time = parse_datetime_rfc1123(get_current_element_text()); + return; + } } if (element_name == xml_name) @@ -782,7 +853,7 @@ namespace azure { namespace storage { namespace protocol { void get_share_stats_reader::handle_element(const utility::string_t& element_name) { - if (element_name == _XPLATSTR("ShareUsage")) + if (element_name == _XPLATSTR("ShareUsageBytes")) { extract_current_element(m_quota); return; } @@ -815,6 +886,10 @@ namespace azure { namespace storage { namespace protocol { { m_directory_path = get_current_element_text(); } + else if (current_element_name == xml_file_id) + { + m_directory_file_id = get_current_element_text(); + } } while (move_to_next_attribute()); } } @@ -831,21 +906,15 @@ namespace azure { namespace storage { namespace protocol { } } - if (element_name == _XPLATSTR("File")) + if (element_name == xml_name) { - m_is_file = true; + m_name = get_current_element_text(); return; } - if (element_name == _XPLATSTR("Directory")) + if (element_name == xml_file_id) { - m_is_file = false; - return; - } - - if (element_name == xml_name) - { - m_name = get_current_element_text(); + m_file_id = get_current_element_text(); return; } @@ -861,15 +930,14 @@ namespace azure { namespace storage { namespace protocol { if ((element_name == _XPLATSTR("File") || element_name == _XPLATSTR("Directory")) && get_parent_element_name() == _XPLATSTR("Entries")) { // End of the data for a file or directory. 
Create an item and add it to the list - if (element_name == _XPLATSTR("File")) - { - m_is_file = true; - } - m_items.push_back(list_file_and_directory_item(m_is_file, std::move(m_name), m_size)); + bool is_file = element_name == _XPLATSTR("File"); + list_file_and_directory_item new_item(is_file, std::move(m_name), m_size); + new_item.set_file_id(std::move(m_file_id)); + m_items.emplace_back(std::move(new_item)); - m_is_file = false; m_name = utility::string_t(); m_size = 0; + m_file_id = utility::string_t(); } } @@ -1027,4 +1095,50 @@ namespace azure { namespace storage { namespace protocol { } } + std::string user_delegation_key_time_writer::write(const utility::datetime& start, const utility::datetime& expiry) + { + std::ostringstream outstream; + initialize(outstream); + + write_start_element(xml_user_delegation_key_info); + write_element(xml_user_delegation_key_start, core::convert_to_iso8601_string(start, 0)); + write_element(xml_user_delegation_key_expiry, core::convert_to_iso8601_string(expiry, 0)); + write_end_element(); + + finalize(); + return outstream.str(); + } + + void user_delegation_key_reader::handle_element(const utility::string_t& element_name) + { + if (element_name == xml_user_delegation_key_signed_oid) + { + extract_current_element(m_key.signed_oid); + } + else if (element_name == xml_user_delegation_key_signed_tid) + { + extract_current_element(m_key.signed_tid); + } + else if (element_name == xml_user_delegation_key_signed_start) + { + m_key.signed_start = utility::datetime::from_string(get_current_element_text(), utility::datetime::ISO_8601); + } + else if (element_name == xml_user_delegation_key_signed_expiry) + { + m_key.signed_expiry = utility::datetime::from_string(get_current_element_text(), utility::datetime::ISO_8601); + } + else if (element_name == xml_user_delegation_key_signed_service) + { + extract_current_element(m_key.signed_service); + } + else if (element_name == xml_user_delegation_key_signed_version) + { + 
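The File/Directory end-element handler above moves the accumulated per-entry state into the result list and then resets it for the next entry. A minimal self-contained sketch of that accumulate-then-reset pattern (types and names are simplified illustrations, not the library's):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

struct list_item {
    bool is_file;
    std::string name;
    long long size;
};

struct listing_reader {
    std::string m_name;    // accumulated from the <Name> element
    long long m_size = 0;  // accumulated from the size element
    std::vector<list_item> m_items;

    // Called when the closing </File> or </Directory> tag is seen:
    // move the accumulated fields into the list, then reset them.
    void end_entry(bool is_file) {
        m_items.push_back(list_item{is_file, std::move(m_name), m_size});
        m_name.clear();
        m_size = 0;
    }
};
```

Resetting every accumulator after each push is what keeps a stale field (like the old `m_is_file` flag the diff removes) from leaking into the next entry.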
extract_current_element(m_key.signed_version); + } + else if (element_name == xml_user_delegation_key_value) + { + extract_current_element(m_key.key); + } + } + }}} // namespace azure::storage::protocol diff --git a/Microsoft.WindowsAzure.Storage/src/queue_request_factory.cpp b/Microsoft.WindowsAzure.Storage/src/queue_request_factory.cpp index 6d85db72..33991134 100644 --- a/Microsoft.WindowsAzure.Storage/src/queue_request_factory.cpp +++ b/Microsoft.WindowsAzure.Storage/src/queue_request_factory.cpp @@ -155,7 +155,7 @@ namespace azure { namespace storage { namespace protocol { web::http::http_request add_message(const cloud_queue_message& message, std::chrono::seconds time_to_live, std::chrono::seconds initial_visibility_timeout, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { - if (time_to_live.count() >= 0LL && time_to_live.count() != 604800LL) + if (time_to_live.count() >= -1LL && time_to_live.count() != 604800LL) { uri_builder.append_query(core::make_query_parameter(_XPLATSTR("messagettl"), time_to_live.count(), /* do_encoding */ false)); } diff --git a/Microsoft.WindowsAzure.Storage/src/request_factory.cpp b/Microsoft.WindowsAzure.Storage/src/request_factory.cpp index a20e0ae6..6d4127ae 100644 --- a/Microsoft.WindowsAzure.Storage/src/request_factory.cpp +++ b/Microsoft.WindowsAzure.Storage/src/request_factory.cpp @@ -68,6 +68,14 @@ namespace azure { namespace storage { namespace protocol { return request; } + web::http::http_request get_account_properties(web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + { + uri_builder.append_query(uri_query_resource_type, resource_account); + uri_builder.append_query(uri_query_component, component_properties); + web::http::http_request request(base_request(web::http::methods::GET, uri_builder, timeout, context)); + return request; + } + void add_optional_header(web::http::http_headers& headers, const 
utility::string_t& header, const utility::string_t& value) { if (!value.empty()) @@ -81,12 +89,23 @@ namespace azure { namespace storage { namespace protocol { web::http::http_headers& headers = request.headers(); for (cloud_metadata::const_iterator it = metadata.cbegin(); it != metadata.cend(); ++it) { + if (core::has_whitespace_or_empty(it->first)) + { + throw std::invalid_argument(protocol::error_empty_whitespace_metadata_name); + } if (core::is_empty_or_whitespace(it->second)) { throw std::invalid_argument(protocol::error_empty_metadata_value); } - - headers.add(ms_header_metadata_prefix + it->first, it->second); + if (isspace(*it->second.begin()) || isspace(*it->second.rbegin())) + { + headers.add(ms_header_metadata_prefix + it->first, core::str_trim_starting_trailing_whitespaces(it->second)); + } + else + { + headers.add(ms_header_metadata_prefix + it->first, it->second); + } + } } diff --git a/Microsoft.WindowsAzure.Storage/src/request_result.cpp b/Microsoft.WindowsAzure.Storage/src/request_result.cpp index bd7721ce..48e6d718 100644 --- a/Microsoft.WindowsAzure.Storage/src/request_result.cpp +++ b/Microsoft.WindowsAzure.Storage/src/request_result.cpp @@ -29,7 +29,8 @@ namespace azure { namespace storage { m_target_location(target_location), m_end_time(utility::datetime::utc_now()), m_http_status_code(response.status_code()), - m_content_length(std::numeric_limits::max()) + m_content_length(std::numeric_limits::max()), + m_request_server_encrypted(false) { parse_headers(response.headers()); if (parse_body_as_error) @@ -45,7 +46,8 @@ namespace azure { namespace storage { m_end_time(utility::datetime::utc_now()), m_http_status_code(http_status_code), m_extended_error(std::move(extended_error)), - m_content_length(std::numeric_limits::max()) + m_content_length(std::numeric_limits::max()), + m_request_server_encrypted(false) { parse_headers(response.headers()); } @@ -55,6 +57,7 @@ namespace azure { namespace storage { headers.match(protocol::ms_header_request_id, 
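The metadata hunk above rejects empty or whitespace-only names and values, and trims leading/trailing whitespace from values before emitting them as `x-ms-meta-*` headers. An illustrative stand-in for that trimming step (the helper name is hypothetical, not the library's `core::str_trim_starting_trailing_whitespaces`):

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Trim leading and trailing whitespace; interior whitespace is preserved,
// matching what a header value can legally carry.
std::string trim_ws(const std::string& s) {
    std::size_t b = 0, e = s.size();
    while (b < e && std::isspace(static_cast<unsigned char>(s[b]))) ++b;
    while (e > b && std::isspace(static_cast<unsigned char>(s[e - 1]))) --e;
    return s.substr(b, e - b);
}
```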
m_service_request_id); headers.match(web::http::header_names::content_length, m_content_length); headers.match(web::http::header_names::content_md5, m_content_md5); + headers.match(protocol::ms_header_content_crc64, m_content_crc64); headers.match(web::http::header_names::etag, m_etag); utility::string_t request_server_encrypted; diff --git a/Microsoft.WindowsAzure.Storage/src/resources.cpp b/Microsoft.WindowsAzure.Storage/src/resources.cpp index 36e539f3..0d14250b 100644 --- a/Microsoft.WindowsAzure.Storage/src/resources.cpp +++ b/Microsoft.WindowsAzure.Storage/src/resources.cpp @@ -16,6 +16,7 @@ // ----------------------------------------------------------------------------------------- #include "stdafx.h" +#include "wascore/resources.h" #include "wascore/basic_types.h" namespace azure { namespace storage { namespace protocol { diff --git a/Microsoft.WindowsAzure.Storage/src/response_parsers.cpp b/Microsoft.WindowsAzure.Storage/src/response_parsers.cpp index 39fb08e0..196ab3b6 100644 --- a/Microsoft.WindowsAzure.Storage/src/response_parsers.cpp +++ b/Microsoft.WindowsAzure.Storage/src/response_parsers.cpp @@ -22,6 +22,8 @@ #include "wascore/constants.h" #include "wascore/resources.h" +#include "cpprest/asyncrt_utils.h" + namespace azure { namespace storage { namespace protocol { void preprocess_response_void(const web::http::http_response& response, const request_result& result, operation_context context) @@ -54,7 +56,12 @@ namespace azure { namespace storage { namespace protocol { return get_header_value(response, web::http::header_names::etag); } - utility::datetime parse_last_modified(const utility::string_t& value) + utility::datetime parse_datetime_iso8601(const utility::string_t& value) + { + return utility::datetime::from_string(value, utility::datetime::date_format::ISO_8601); + } + + utility::datetime parse_datetime_rfc1123(const utility::string_t& value) { return utility::datetime::from_string(value, utility::datetime::date_format::RFC_1123); } @@ -64,7 
+71,7 @@ namespace azure { namespace storage { namespace protocol { utility::string_t value; if (response.headers().match(web::http::header_names::last_modified, value)) { - return parse_last_modified(value); + return parse_datetime_rfc1123(value); } else { @@ -157,7 +164,7 @@ namespace azure { namespace storage { namespace protocol { utility::string_t value; if (response.headers().match(ms_header_lease_time, value)) { - int64_t seconds = utility::conversions::scan_string<int64_t>(value); + int64_t seconds = utility::conversions::details::scan_string<int64_t>(value); return std::chrono::seconds(seconds); } else @@ -166,12 +173,61 @@ namespace azure { namespace storage { namespace protocol { } } + cloud_file_attributes parse_file_attributes(const utility::string_t& value) + { + cloud_file_attributes attributes = static_cast<cloud_file_attributes>(0); + for (const auto& attribute : core::string_split(value, header_value_file_attribute_delimiter)) + { + if (attribute == header_value_file_attribute_none) + { + attributes |= cloud_file_attributes::none; + } + else if (attribute == header_value_file_attribute_readonly) + { + attributes |= cloud_file_attributes::readonly; + } + else if (attribute == header_value_file_attribute_hidden) + { + attributes |= cloud_file_attributes::hidden; + } + else if (attribute == header_value_file_attribute_system) + { + attributes |= cloud_file_attributes::system; + } + else if (attribute == header_value_file_attribute_directory) + { + attributes |= cloud_file_attributes::directory; + } + else if (attribute == header_value_file_attribute_archive) + { + attributes |= cloud_file_attributes::archive; + } + else if (attribute == header_value_file_attribute_temporary) + { + attributes |= cloud_file_attributes::temporary; + } + else if (attribute == header_value_file_attribute_offline) + { + attributes |= cloud_file_attributes::offline; + } + else if (attribute == header_value_file_attribute_notcontentindexed) + { + attributes |= cloud_file_attributes::not_content_indexed; + } + else 
if (attribute == header_value_file_attribute_noscrubdata) + { + attributes |= cloud_file_attributes::no_scrub_data; + } + } + return attributes; + } + int parse_approximate_messages_count(const web::http::http_response& response) { utility::string_t value; if (response.headers().match(ms_header_approximate_messages_count, value)) { - return utility::conversions::scan_string(value); + return utility::conversions::details::scan_string(value); } return -1; @@ -232,19 +288,25 @@ namespace azure { namespace storage { namespace protocol { state.m_status = parse_copy_status(status); state.m_copy_id = get_header_value(headers, ms_header_copy_id); state.m_source = get_header_value(headers, ms_header_copy_source); - state.m_completion_time = parse_copy_completion_time(get_header_value(headers, ms_header_copy_completion_time)); + state.m_completion_time = parse_datetime(get_header_value(headers, ms_header_copy_completion_time)); state.m_status_description = get_header_value(headers, ms_header_copy_status_description); + state.m_destination_snapshot_time = parse_datetime(get_header_value(headers, ms_header_copy_destination_snapshot), utility::datetime::date_format::ISO_8601); parse_copy_progress(get_header_value(headers, ms_header_copy_progress), state.m_bytes_copied, state.m_total_bytes); } return state; } - utility::datetime response_parsers::parse_copy_completion_time(const utility::string_t& value) + bool response_parsers::parse_boolean(const utility::string_t& value) + { + return value == _XPLATSTR("true"); + } + + utility::datetime response_parsers::parse_datetime(const utility::string_t& value, utility::datetime::date_format format) { if (!value.empty()) { - return utility::datetime::from_string(value, utility::datetime::date_format::RFC_1123); + return utility::datetime::from_string(value, format); } else { @@ -328,4 +390,85 @@ namespace azure { namespace storage { namespace protocol { } } + blob_container_public_access_type parse_public_access_type(const 
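`parse_file_attributes` above splits the header value on a delimiter and ORs the matching enum flags into an accumulator. A self-contained sketch with a reduced, assumed flag set (the real enum and delimiter constant live in the library's headers):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Simplified bitmask; values are illustrative.
enum file_attrs : unsigned {
    attr_none = 0, attr_readonly = 1, attr_hidden = 2, attr_archive = 4
};

unsigned parse_attrs(const std::string& value) {
    unsigned attrs = 0;
    std::istringstream in(value);
    std::string token;
    // The service sends attributes joined with '|', e.g. "ReadOnly|Archive".
    while (std::getline(in, token, '|')) {
        if (token == "ReadOnly") attrs |= attr_readonly;
        else if (token == "Hidden") attrs |= attr_hidden;
        else if (token == "Archive") attrs |= attr_archive;
        // Unrecognized tokens are ignored rather than rejected.
    }
    return attrs;
}
```

Ignoring unknown tokens (rather than throwing) keeps an older client working when the service introduces a new attribute.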
utility::string_t& value) + { + if (value == resource_blob) + { + return blob_container_public_access_type::blob; + } + else if (value == resource_container) + { + return blob_container_public_access_type::container; + } + else + { + return blob_container_public_access_type::off; + } + } + + blob_container_public_access_type parse_public_access_type(const web::http::http_response& response) + { + return parse_public_access_type(get_header_value(response.headers(), ms_header_blob_public_access)); + } + + premium_blob_tier response_parsers::parse_premium_blob_tier(const utility::string_t& value) + { + if (value == header_value_access_tier_p4) + { + return premium_blob_tier::p4; + } + else if (value == header_value_access_tier_p6) + { + return premium_blob_tier::p6; + } + else if (value == header_value_access_tier_p10) + { + return premium_blob_tier::p10; + } + else if (value == header_value_access_tier_p20) + { + return premium_blob_tier::p20; + } + else if (value == header_value_access_tier_p30) + { + return premium_blob_tier::p30; + } + else if (value == header_value_access_tier_p40) + { + return premium_blob_tier::p40; + } + else if (value == header_value_access_tier_p50) + { + return premium_blob_tier::p50; + } + else if (value == header_value_access_tier_p60) + { + return premium_blob_tier::p60; + } + else + { + return premium_blob_tier::unknown; + } + } + + standard_blob_tier response_parsers::parse_standard_blob_tier(const utility::string_t& value) + { + if (value == header_value_access_tier_hot) + { + return standard_blob_tier::hot; + } + else if (value == header_value_access_tier_cool) + { + return standard_blob_tier::cool; + } + else if (value == header_value_access_tier_archive) + { + return standard_blob_tier::archive; + } + else + { + return standard_blob_tier::unknown; + } + } + }}} // namespace azure::storage::protocol diff --git a/Microsoft.WindowsAzure.Storage/src/retry_policies.cpp b/Microsoft.WindowsAzure.Storage/src/retry_policies.cpp index 
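The tier and public-access parsers above all follow the same shape: map known header strings onto enum values and fall back to a catch-all (`unknown` or `off`) for anything unrecognized, so new service-side values never break deserialization. A reduced sketch of the pattern:

```cpp
#include <cassert>
#include <string>

enum class standard_tier { hot, cool, archive, unknown };

standard_tier parse_tier(const std::string& v) {
    if (v == "Hot") return standard_tier::hot;
    if (v == "Cool") return standard_tier::cool;
    if (v == "Archive") return standard_tier::archive;
    return standard_tier::unknown;  // forward-compatible fallback
}
```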
d5d2b948..6e3477dd 100644 --- a/Microsoft.WindowsAzure.Storage/src/retry_policies.cpp +++ b/Microsoft.WindowsAzure.Storage/src/retry_policies.cpp @@ -42,6 +42,7 @@ namespace azure { namespace storage { retry_info basic_common_retry_policy::evaluate(const retry_context& retry_context, operation_context context) { UNREFERENCED_PARAMETER(context); + if (retry_context.current_retry_count() >= m_max_attempts) { return retry_info(); diff --git a/Microsoft.WindowsAzure.Storage/src/shared_access_signature.cpp b/Microsoft.WindowsAzure.Storage/src/shared_access_signature.cpp index 6d937e2f..da2ee9d7 100644 --- a/Microsoft.WindowsAzure.Storage/src/shared_access_signature.cpp +++ b/Microsoft.WindowsAzure.Storage/src/shared_access_signature.cpp @@ -39,11 +39,6 @@ namespace azure { namespace storage { namespace protocol { } } - utility::string_t convert_datetime_if_initialized(utility::datetime value) - { - return value.is_initialized() ? core::truncate_fractional_seconds(value).to_string(utility::datetime::ISO_8601) : utility::string_t(); - } - void log_sas_string_to_sign(const utility::string_t& string_to_sign) { operation_context context; @@ -67,8 +62,8 @@ namespace azure { namespace storage { namespace protocol { if (policy.is_valid()) { - add_query_if_not_empty(builder, uri_query_sas_start, convert_datetime_if_initialized(policy.start()), /* do_encoding */ true); - add_query_if_not_empty(builder, uri_query_sas_expiry, convert_datetime_if_initialized(policy.expiry()), /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_start, core::convert_to_iso8601_string(policy.start(), 0), /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_expiry, core::convert_to_iso8601_string(policy.expiry(), 0), /* do_encoding */ true); add_query_if_not_empty(builder, uri_query_sas_permissions, policy.permissions_to_string(), /* do_encoding */ true); } @@ -78,8 +73,8 @@ namespace azure { namespace storage { namespace protocol { void 
get_sas_string_to_sign(utility::string_t& str, const utility::string_t& identifier, const shared_access_policy& policy, const utility::string_t& resource) { str.append(policy.permissions_to_string()).append(_XPLATSTR("\n")); - str.append(convert_datetime_if_initialized(policy.start())).append(_XPLATSTR("\n")); - str.append(convert_datetime_if_initialized(policy.expiry())).append(_XPLATSTR("\n")); + str.append(core::convert_to_iso8601_string(policy.start(), 0)).append(_XPLATSTR("\n")); + str.append(core::convert_to_iso8601_string(policy.expiry(), 0)).append(_XPLATSTR("\n")); str.append(resource).append(_XPLATSTR("\n")); str.append(identifier).append(_XPLATSTR("\n")); str.append(policy.address_or_range().to_string()).append(_XPLATSTR("\n")); @@ -139,7 +134,7 @@ namespace azure { namespace storage { namespace protocol { #pragma region Blob SAS Helpers - utility::string_t get_blob_sas_string_to_sign(const utility::string_t& identifier, const shared_access_policy& policy, const cloud_blob_shared_access_headers& headers, const utility::string_t& resource, const storage_credentials& credentials) + utility::string_t get_blob_sas_string_to_sign(const utility::string_t& identifier, const shared_access_policy& policy, const cloud_blob_shared_access_headers& headers, const utility::string_t& resource_type, const utility::string_t& resource, const utility::string_t& snapshot_time, const storage_credentials& credentials) { //// StringToSign = signedpermissions + "\n" + //// signedstart + "\n" + @@ -149,6 +144,8 @@ namespace azure { namespace storage { namespace protocol { //// signedIP + "\n" + //// signedProtocol + "\n" + //// signedversion + "\n" + + //// signedResource + "\n" + + //// signedSnapshotTime + "\n" + //// cachecontrol + "\n" + //// contentdisposition + "\n" + //// contentencoding + "\n" + @@ -162,6 +159,8 @@ namespace azure { namespace storage { namespace protocol { utility::string_t string_to_sign; string_to_sign.reserve(256); 
get_sas_string_to_sign(string_to_sign, identifier, policy, resource); + string_to_sign.append(_XPLATSTR("\n")).append(resource_type); + string_to_sign.append(_XPLATSTR("\n")).append(snapshot_time); string_to_sign.append(_XPLATSTR("\n")).append(headers.cache_control()); string_to_sign.append(_XPLATSTR("\n")).append(headers.content_disposition()); string_to_sign.append(_XPLATSTR("\n")).append(headers.content_encoding()); @@ -170,12 +169,12 @@ namespace azure { namespace storage { namespace protocol { log_sas_string_to_sign(string_to_sign); - return calculate_hmac_sha256_hash(string_to_sign, credentials); + return calculate_hmac_sha256_hash(string_to_sign, credentials.account_key()); } - utility::string_t get_blob_sas_token(const utility::string_t& identifier, const shared_access_policy& policy, const cloud_blob_shared_access_headers& headers, const utility::string_t& resource_type, const utility::string_t& resource, const storage_credentials& credentials) + utility::string_t get_blob_sas_token(const utility::string_t& identifier, const shared_access_policy& policy, const cloud_blob_shared_access_headers& headers, const utility::string_t& resource_type, const utility::string_t& resource, const utility::string_t& snapshot_time, const storage_credentials& credentials) { - auto signature = get_blob_sas_string_to_sign(identifier, policy, headers, resource, credentials); + auto signature = get_blob_sas_string_to_sign(identifier, policy, headers, resource_type, resource, snapshot_time, credentials); auto builder = get_sas_token_builder(identifier, policy, signature); add_query_if_not_empty(builder, uri_query_sas_resource, resource_type, /* do_encoding */ true); @@ -188,6 +187,75 @@ namespace azure { namespace storage { namespace protocol { return builder.query(); } + utility::string_t get_blob_user_delegation_sas_token(const shared_access_policy& policy, const cloud_blob_shared_access_headers& headers, const utility::string_t& resource_type, const utility::string_t& 
resource, const utility::string_t& snapshot_time, const user_delegation_key& key) + { + const utility::string_t new_line = _XPLATSTR("\n"); + + //// StringToSign = signed permissions + "\n" + + //// signed start + "\n" + + //// signed expiry + "\n" + + //// canonicalized resource + "\n" + + //// signed key oid + "\n" + + //// signed key tid + "\n" + + //// signed key start + "\n" + + //// signed key expiry + "\n" + + //// signed key service + "\n" + + //// signed key version + "\n" + + //// signed IP + "\n" + + //// signed protocol + "\n" + + //// signed version + "\n" + + //// signed resource type + "\n" + + //// signed snapshot time + "\n" + + //// cache control + "\n" + + //// content disposition + "\n" + + //// content encoding + "\n" + + //// content language + "\n" + + //// content type + //// + //// HMAC-SHA256(UTF8.Encode(StringToSign)) + + utility::string_t string_to_sign; + string_to_sign += policy.permissions_to_string() + new_line; + string_to_sign += core::convert_to_iso8601_string(policy.start(), 0) + new_line; + string_to_sign += core::convert_to_iso8601_string(policy.expiry(), 0) + new_line; + string_to_sign += resource + new_line; + string_to_sign += key.signed_oid + new_line; + string_to_sign += key.signed_tid + new_line; + string_to_sign += core::convert_to_iso8601_string(key.signed_start, 0) + new_line; + string_to_sign += core::convert_to_iso8601_string(key.signed_expiry, 0) + new_line; + string_to_sign += key.signed_service + new_line; + string_to_sign += key.signed_version + new_line; + string_to_sign += policy.address_or_range().to_string() + new_line; + string_to_sign += policy.protocols_to_string() + new_line; + string_to_sign += header_value_storage_version + new_line; + string_to_sign += resource_type + new_line; + string_to_sign += snapshot_time + new_line; + string_to_sign += headers.cache_control() + new_line; + string_to_sign += headers.content_disposition() + new_line; + string_to_sign += headers.content_encoding() + new_line; + 
string_to_sign += headers.content_language() + new_line; + string_to_sign += headers.content_type(); + + auto signature = calculate_hmac_sha256_hash(string_to_sign, utility::conversions::from_base64(key.key)); + + auto builder = get_sas_token_builder(utility::string_t(), policy, signature); + + add_query_if_not_empty(builder, uri_query_sas_resource, resource_type, /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_cache_control, headers.cache_control(), /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_content_type, headers.content_type(), /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_content_encoding, headers.content_encoding(), /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_content_language, headers.content_language(), /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_content_disposition, headers.content_disposition(), /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_skoid, key.signed_oid, /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_sktid, key.signed_tid, /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_skt, core::convert_to_iso8601_string(key.signed_start, 0), /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_ske, core::convert_to_iso8601_string(key.signed_expiry, 0), /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_sks, key.signed_service, /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_skv, key.signed_version, /* do_encoding */ true); + + return builder.query(); + } + #pragma endregion #pragma region Queue SAS Helpers @@ -211,7 +279,7 @@ namespace azure { namespace storage { namespace protocol { log_sas_string_to_sign(string_to_sign); - return calculate_hmac_sha256_hash(string_to_sign, credentials); + return calculate_hmac_sha256_hash(string_to_sign, credentials.account_key()); } 
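The user-delegation string-to-sign built above is a fixed-order, newline-joined field list in which empty optional fields still occupy their slot (only the final field has no trailing newline). A sketch of just the joining step, with placeholder field values:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Join fields with '\n'; an empty field contributes an empty line, which is
// exactly how optional SAS fields are represented in the string-to-sign.
std::string join_fields(const std::vector<std::string>& fields) {
    std::string out;
    for (std::size_t i = 0; i < fields.size(); ++i) {
        if (i) out += '\n';
        out += fields[i];
    }
    return out;
}
```

Getting the field order and the empty slots exactly right matters: the service recomputes the same string on its side, and any drift makes the HMAC signatures disagree.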
utility::string_t get_queue_sas_token(const utility::string_t& identifier, const shared_access_policy& policy, const utility::string_t& resource, const storage_credentials& credentials) @@ -253,7 +321,7 @@ namespace azure { namespace storage { namespace protocol { log_sas_string_to_sign(string_to_sign); - return calculate_hmac_sha256_hash(string_to_sign, credentials); + return calculate_hmac_sha256_hash(string_to_sign, credentials.account_key()); } utility::string_t get_table_sas_token(const utility::string_t& identifier, const shared_access_policy& policy, const utility::string_t& table_name, const utility::string_t& start_partition_key, const utility::string_t& start_row_key, const utility::string_t& end_partition_key, const utility::string_t& end_row_key, const utility::string_t& resource, const storage_credentials& credentials) @@ -303,7 +371,7 @@ namespace azure { namespace storage { namespace protocol { log_sas_string_to_sign(string_to_sign); - return calculate_hmac_sha256_hash(string_to_sign, credentials); + return calculate_hmac_sha256_hash(string_to_sign, credentials.account_key()); } utility::string_t get_file_sas_token(const utility::string_t& identifier, const shared_access_policy& policy, const cloud_file_shared_access_headers& headers, const utility::string_t& resource_type, const utility::string_t& resource, const storage_credentials& credentials) @@ -347,15 +415,15 @@ namespace azure { namespace storage { namespace protocol { string_to_sign.append(policy.permissions_to_string()).append(_XPLATSTR("\n")); string_to_sign.append(policy.service_types_to_string()).append(_XPLATSTR("\n")); string_to_sign.append(policy.resource_types_to_string()).append(_XPLATSTR("\n")); - string_to_sign.append(convert_datetime_if_initialized(policy.start())).append(_XPLATSTR("\n")); - string_to_sign.append(convert_datetime_if_initialized(policy.expiry())).append(_XPLATSTR("\n")); + string_to_sign.append(core::convert_to_iso8601_string(policy.start(), 
0)).append(_XPLATSTR("\n")); + string_to_sign.append(core::convert_to_iso8601_string(policy.expiry(), 0)).append(_XPLATSTR("\n")); string_to_sign.append(policy.address_or_range().to_string()).append(_XPLATSTR("\n")); string_to_sign.append(policy.protocols_to_string()).append(_XPLATSTR("\n")); string_to_sign.append(header_value_storage_version).append(_XPLATSTR("\n")); log_sas_string_to_sign(string_to_sign); - return calculate_hmac_sha256_hash(string_to_sign, credentials); + return calculate_hmac_sha256_hash(string_to_sign, credentials.account_key()); } utility::string_t get_account_sas_token(const utility::string_t& identifier, const account_shared_access_policy& policy, const storage_credentials& credentials) @@ -373,8 +441,8 @@ namespace azure { namespace storage { namespace protocol { if (policy.is_valid()) { - add_query_if_not_empty(builder, uri_query_sas_start, convert_datetime_if_initialized(policy.start()), /* do_encoding */ true); - add_query_if_not_empty(builder, uri_query_sas_expiry, convert_datetime_if_initialized(policy.expiry()), /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_start, core::convert_to_iso8601_string(policy.start(), 0), /* do_encoding */ true); + add_query_if_not_empty(builder, uri_query_sas_expiry, core::convert_to_iso8601_string(policy.expiry(), 0), /* do_encoding */ true); add_query_if_not_empty(builder, uri_query_sas_permissions, policy.permissions_to_string(), /* do_encoding */ true); add_query_if_not_empty(builder, uri_query_sas_ip, policy.address_or_range().to_string(), /* do_encoding */ true); add_query_if_not_empty(builder, uri_query_sas_protocol, policy.protocols_to_string(), /* do_encoding */ true); diff --git a/Microsoft.WindowsAzure.Storage/src/streams.cpp b/Microsoft.WindowsAzure.Storage/src/streams.cpp index a86f6614..b252d190 100644 --- a/Microsoft.WindowsAzure.Storage/src/streams.cpp +++ b/Microsoft.WindowsAzure.Storage/src/streams.cpp @@ -44,15 +44,22 @@ namespace azure { namespace storage { 
namespace core { std::shared_ptr basic_cloud_ostreambuf::prepare_buffer() { - utility::string_t block_md5; + checksum block_checksum; if (m_transaction_hash_provider.is_enabled()) { m_transaction_hash_provider.close(); - block_md5 = m_transaction_hash_provider.hash(); - m_transaction_hash_provider = hash_provider::create_md5_hash_provider(); + block_checksum = m_transaction_hash_provider.hash(); + if (block_checksum.is_md5()) + { + m_transaction_hash_provider = hash_provider::create_md5_hash_provider(); + } + else if (block_checksum.is_crc64()) + { + m_transaction_hash_provider = hash_provider::create_crc64_hash_provider(); + } } - auto buffer = std::make_shared(m_buffer, block_md5); + auto buffer = std::make_shared(m_buffer, block_checksum); m_buffer = concurrency::streams::container_buffer>(); m_buffer_size = m_next_buffer_size; return buffer; @@ -83,7 +90,7 @@ namespace azure { namespace storage { namespace core { auto remaining = count; while (remaining > 0) { - auto write_size = m_buffer_size - m_buffer.in_avail(); + auto write_size = m_buffer_size - static_cast(m_buffer.size()); if (write_size > remaining) { write_size = remaining; diff --git a/Microsoft.WindowsAzure.Storage/src/table_request_factory.cpp b/Microsoft.WindowsAzure.Storage/src/table_request_factory.cpp index 731777ce..c39f5fca 100644 --- a/Microsoft.WindowsAzure.Storage/src/table_request_factory.cpp +++ b/Microsoft.WindowsAzure.Storage/src/table_request_factory.cpp @@ -441,13 +441,12 @@ namespace azure { namespace storage { namespace protocol { return request; } - web::http::http_request execute_batch_operation(Concurrency::streams::stringstreambuf& response_buffer, const cloud_table& table, const table_batch_operation& batch_operation, table_payload_format payload_format, bool is_query, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) + web::http::http_request execute_batch_operation(const cloud_table& table, const table_batch_operation& 
batch_operation, table_payload_format payload_format, bool is_query, web::http::uri_builder& uri_builder, const std::chrono::seconds& timeout, operation_context context) { utility::string_t batch_boundary_name = core::generate_boundary_name(_XPLATSTR("batch")); utility::string_t changeset_boundary_name = core::generate_boundary_name(_XPLATSTR("changeset")); web::http::http_request request = table_base_request(web::http::methods::POST, uri_builder, timeout, context); - request.set_response_stream(Concurrency::streams::ostream(response_buffer)); web::http::http_headers& request_headers = request.headers(); request_headers.add(web::http::header_names::accept_charset, header_value_charset_utf8); diff --git a/Microsoft.WindowsAzure.Storage/src/table_response_parsers.cpp b/Microsoft.WindowsAzure.Storage/src/table_response_parsers.cpp index 72e14e55..44281cd5 100644 --- a/Microsoft.WindowsAzure.Storage/src/table_response_parsers.cpp +++ b/Microsoft.WindowsAzure.Storage/src/table_response_parsers.cpp @@ -20,6 +20,8 @@ #include "wascore/protocol_json.h" #include "was/common.h" +#include "cpprest/asyncrt_utils.h" + namespace azure { namespace storage { namespace protocol { utility::string_t table_response_parsers::parse_etag(const web::http::http_response& response) @@ -66,12 +68,14 @@ namespace azure { namespace storage { namespace protocol { return token; } - std::vector table_response_parsers::parse_batch_results(const web::http::http_response& response, Concurrency::streams::stringstreambuf& response_buffer, bool is_query, size_t batch_size) + std::vector table_response_parsers::parse_batch_results(const web::http::http_response& response, const concurrency::streams::container_buffer>& response_buffer, bool is_query, size_t batch_size) { std::vector batch_result; batch_result.reserve(batch_size); - std::string& response_body = response_buffer.collection(); + // TODO: We make a copy of the response here; we may optimize it in the future + const std::vector&
response_collection = response_buffer.collection(); + const std::string response_body(response_collection.begin(), response_collection.end()); // TODO: Make this Casablanca code more robust @@ -89,7 +93,7 @@ namespace azure { namespace storage { namespace protocol { std::string status_code_string = response_body.substr(status_code_begin, status_code_end - status_code_begin); // Extract the status code as an integer - int status_code = utility::conversions::scan_string(utility::conversions::to_string_t(status_code_string)); + int status_code = utility::conversions::details::scan_string(utility::conversions::to_string_t(status_code_string)); // Acceptable codes are 'Created' and 'NoContent' if (status_code == web::http::status_codes::OK || status_code == web::http::status_codes::Created || status_code == web::http::status_codes::Accepted || status_code == web::http::status_codes::NoContent || status_code == web::http::status_codes::PartialContent || status_code == web::http::status_codes::NotFound) @@ -177,7 +181,7 @@ namespace azure { namespace storage { namespace protocol { std::string status_code_string = response_body.substr(status_code_begin, status_code_end - status_code_begin); // Extract the status code as an integer - int status_code = utility::conversions::scan_string(utility::conversions::to_string_t(status_code_string)); + int status_code = utility::conversions::details::scan_string(utility::conversions::to_string_t(status_code_string)); // Acceptable codes are 'Created' and 'NoContent' if (status_code == web::http::status_codes::OK || status_code == web::http::status_codes::Created || status_code == web::http::status_codes::Accepted || status_code == web::http::status_codes::NoContent || status_code == web::http::status_codes::PartialContent) @@ -226,6 +230,12 @@ namespace azure { namespace storage { namespace protocol { } } + if (batch_result.size() != batch_size) { + std::string str; + str.reserve(128); + 
str.append(protocol::error_batch_size_not_match_response).append(" Sent ").append(std::to_string(batch_size)).append(" batch operations and received ").append(std::to_string(batch_result.size())).append(" batch results."); + throw storage_exception(str, false); + } return batch_result; } diff --git a/Microsoft.WindowsAzure.Storage/src/timer_handler.cpp b/Microsoft.WindowsAzure.Storage/src/timer_handler.cpp new file mode 100644 index 00000000..f33d1831 --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/src/timer_handler.cpp @@ -0,0 +1,140 @@ +// ----------------------------------------------------------------------------------------- +// +// Copyright 2018 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+// +// ----------------------------------------------------------------------------------------- + +#include "stdafx.h" +#include "wascore/timer_handler.h" + +namespace azure { namespace storage { namespace core { + + timer_handler::timer_handler(const pplx::cancellation_token& token) : + m_cancellation_token(token), m_is_canceled_by_timeout(false), m_timer_started(false) + { + if (m_cancellation_token != pplx::cancellation_token::none()) + { + m_cancellation_token_registration = m_cancellation_token.register_callback([this]() + { + m_worker_cancellation_token_source.cancel(); + stop_timer(); + }); + } + } + + timer_handler::~timer_handler() + { + if (m_cancellation_token != pplx::cancellation_token::none()) + { + m_cancellation_token.deregister_callback(m_cancellation_token_registration); + } + + stop_timer(); + } + + void timer_handler::start_timer(const std::chrono::milliseconds& time) + { + std::lock_guard guard(m_mutex); + if (m_timer_started.load(std::memory_order_acquire)) + { + return; + } + m_timer_started.store(true, std::memory_order_release); + std::weak_ptr weak_this_pointer = shared_from_this(); + m_timeout_task = timeout_after(time).then([weak_this_pointer]() + { + auto this_pointer = weak_this_pointer.lock(); + if (this_pointer) + { + this_pointer->m_is_canceled_by_timeout.store(true, std::memory_order_release); + this_pointer->m_worker_cancellation_token_source.cancel(); + } + }); + } + + void timer_handler::stop_timer() + { + std::lock_guard guard(m_mutex); + if (m_timer_started.load(std::memory_order_acquire) && m_timer) + { +#ifndef _WIN32 + m_timer->cancel(); +#else + m_timer->stop(); +#endif + if (!m_tce._IsTriggered()) + { + // If task_completion_event is not yet triggered, it means timeout has not been triggered. 
+ m_tce._Cancel(); + } + m_timer.reset(); + } + } + +#ifndef _WIN32 + pplx::task timer_handler::timeout_after(const std::chrono::milliseconds& time) + { + m_timer = std::make_shared>(crossplat::threadpool::shared_instance().service()); + m_timer->expires_from_now(std::chrono::duration_cast(time)); + std::weak_ptr weak_this_pointer = shared_from_this(); + auto callback = [weak_this_pointer](const boost::system::error_code& ec) + { + if (ec != boost::asio::error::operation_aborted) + { + auto this_pointer = weak_this_pointer.lock(); + if (this_pointer) + { + std::lock_guard guard(this_pointer->m_mutex); + if (!this_pointer->m_tce._IsTriggered()) + { + this_pointer->m_tce.set(); + } + } + } + }; + m_timer->async_wait(callback); + + auto event_set = pplx::create_task(m_tce); + + return event_set.then([callback]() {}); + } +#else + pplx::task timer_handler::timeout_after(const std::chrono::milliseconds& time) + { + // Initialize the timer and connect the callback with completion event. + m_timer = std::make_shared>(static_cast(time.count()), 0); + std::weak_ptr weak_this_pointer = shared_from_this(); + auto callback = std::make_shared>([weak_this_pointer](int) + { + auto this_pointer = weak_this_pointer.lock(); + if (this_pointer) + { + std::lock_guard guard(this_pointer->m_mutex); + if (!this_pointer->m_tce._IsTriggered()) + { + this_pointer->m_tce.set(); + } + } + }); + m_timer->link_target(callback.get()); // When timer stops, tce will trigger cancellation. + m_timer->start(); + + auto event_set = pplx::create_task(m_tce); + + // Timer and callback should be preserved before event set has been triggered. 
+ return event_set.then([callback]() {}); + } +#endif + +}}} diff --git a/Microsoft.WindowsAzure.Storage/src/util.cpp b/Microsoft.WindowsAzure.Storage/src/util.cpp index e03dec2d..cf0a3116 100644 --- a/Microsoft.WindowsAzure.Storage/src/util.cpp +++ b/Microsoft.WindowsAzure.Storage/src/util.cpp @@ -21,10 +21,6 @@ #include "wascore/constants.h" #include "wascore/resources.h" -#ifndef _WIN32 -#include "pplx/threadpool.h" -#endif - #ifdef _WIN32 #define WIN32_LEAN_AND_MEAN #include @@ -32,6 +28,7 @@ #include #include #else +#include "pplx/threadpool.h" #include #include #endif @@ -81,7 +78,7 @@ namespace azure { namespace storage { namespace core { return std::numeric_limits::max(); } - pplx::task stream_copy_async(concurrency::streams::istream istream, concurrency::streams::ostream ostream, utility::size64_t length, utility::size64_t max_length) + pplx::task stream_copy_async(concurrency::streams::istream istream, concurrency::streams::ostream ostream, utility::size64_t length, utility::size64_t max_length, const pplx::cancellation_token& cancellation_token, std::shared_ptr timer_handler) { size_t buffer_size(protocol::default_buffer_size); utility::size64_t istream_length = length == std::numeric_limits::max() ? get_remaining_stream_length(istream) : length; @@ -98,7 +95,7 @@ namespace azure { namespace storage { namespace core { auto obuffer = ostream.streambuf(); auto length_ptr = (length != std::numeric_limits::max()) ? 
std::make_shared(length) : nullptr; auto total_ptr = std::make_shared(0); - return pplx::details::do_while([istream, obuffer, buffer_size, length_ptr, total_ptr, max_length] () -> pplx::task + return pplx::details::_do_while([istream, obuffer, buffer_size, length_ptr, total_ptr, max_length, cancellation_token, timer_handler] () -> pplx::task { size_t read_length = buffer_size; if ((length_ptr != nullptr) && (*length_ptr < read_length)) @@ -106,6 +103,13 @@ namespace azure { namespace storage { namespace core { read_length = static_cast(*length_ptr); } + // need to cancel the potentially heavy read/write operation if cancellation token is canceled. + if (cancellation_token.is_canceled()) + { + assert_timed_out_by_timer(timer_handler); + throw storage_exception(protocol::error_operation_canceled); + } + return istream.read(obuffer, read_length).then([length_ptr, total_ptr, max_length] (size_t count) -> bool { *total_ptr += count; @@ -168,6 +172,21 @@ namespace azure { namespace storage { namespace core { return true; } + bool has_whitespace_or_empty(const utility::string_t& value) + { + if (value.empty()) return true; + + for (utility::string_t::const_iterator it = value.cbegin(); it != value.cend(); ++it) + { + if (isspace(*it)) + { + return true; + } + } + + return false; + } + utility::string_t single_quote(const utility::string_t& value) { const utility::char_t SINGLE_QUOTE = _XPLATSTR('\''); @@ -245,13 +264,6 @@ namespace azure { namespace storage { namespace core { return true; } - utility::datetime truncate_fractional_seconds(utility::datetime value) - { - utility::datetime result; - result = result + (value.to_interval() / second_interval * second_interval); - return result; - } - utility::string_t convert_to_string(double value) { utility::ostringstream_t buffer; @@ -275,97 +287,64 @@ namespace azure { namespace storage { namespace core { return result; } - utility::string_t convert_to_string_with_fixed_length_fractional_seconds(utility::datetime value) + 
utility::string_t convert_to_string(const utility::string_t& source) { - // TODO: Remove this function if Casablanca changes their datetime serialization to not trim trailing zeros in the fractional seconds component of a time - -#ifdef _WIN32 - int status; - - ULARGE_INTEGER largeInt; - largeInt.QuadPart = value.to_interval(); - - FILETIME ft; - ft.dwHighDateTime = largeInt.HighPart; - ft.dwLowDateTime = largeInt.LowPart; + return source; + } - SYSTEMTIME systemTime; - if (!FileTimeToSystemTime((const FILETIME *)&ft, &systemTime)) + utility::string_t convert_to_iso8601_string(const utility::datetime& value, int num_decimal_digits) + { + if (!value.is_initialized()) { - throw utility::details::create_system_error(GetLastError()); + return utility::string_t(); } - std::wostringstream outStream; + utility::string_t time_str = value.to_string(utility::datetime::ISO_8601); + auto second_end = time_str.find_last_of(_XPLATSTR(':')) + 3; + auto z_pos = time_str.find_last_of(_XPLATSTR('Z')); - const size_t buffSize = 64; -#if _WIN32_WINNT < _WIN32_WINNT_VISTA - TCHAR dateStr[buffSize] = { 0 }; - status = GetDateFormat(LOCALE_INVARIANT, 0, &systemTime, "yyyy-MM-dd", dateStr, buffSize); -#else - wchar_t dateStr[buffSize] = { 0 }; - status = GetDateFormatEx(LOCALE_NAME_INVARIANT, 0, &systemTime, L"yyyy-MM-dd", dateStr, buffSize, NULL); -#endif // _WIN32_WINNT < _WIN32_WINNT_VISTA - if (status == 0) + if (second_end == utility::string_t::npos || z_pos < second_end) { - throw utility::details::create_system_error(GetLastError()); + throw std::logic_error("Invalid date and time format."); } -#if _WIN32_WINNT < _WIN32_WINNT_VISTA - TCHAR timeStr[buffSize] = { 0 }; - status = GetTimeFormat(LOCALE_INVARIANT, TIME_NOTIMEMARKER | TIME_FORCE24HOURFORMAT, &systemTime, "HH':'mm':'ss", timeStr, buffSize); -#else - wchar_t timeStr[buffSize] = { 0 }; - status = GetTimeFormatEx(LOCALE_NAME_INVARIANT, TIME_NOTIMEMARKER | TIME_FORCE24HOURFORMAT, &systemTime, L"HH':'mm':'ss", timeStr, 
buffSize); -#endif // _WIN32_WINNT < _WIN32_WINNT_VISTA - if (status == 0) - { - throw utility::details::create_system_error(GetLastError()); - } + utility::string_t integral_part = time_str.substr(0, second_end); + utility::string_t fractional_part = time_str.substr(second_end, z_pos - second_end); + utility::string_t suffix = time_str.substr(z_pos); - outStream << dateStr << "T" << timeStr; - uint64_t frac_sec = largeInt.QuadPart % second_interval; - if (frac_sec > 0) + if (num_decimal_digits == 0) { - // Append fractional second, which is a 7-digit value - // This way, '1200' becomes '0001200' - char buf[9] = { 0 }; - sprintf_s(buf, sizeof(buf), ".%07ld", (long int)frac_sec); - outStream << buf; - } - outStream << "Z"; - - return outStream.str(); -#else //LINUX - uint64_t input = value.to_interval(); - uint64_t frac_sec = input % second_interval; - input /= second_interval; // convert to seconds - time_t time = (time_t)input - (time_t)11644473600LL;// diff between windows and unix epochs (seconds) - - struct tm datetime; - gmtime_r(&time, &datetime); - - const int max_dt_length = 64; - char output[max_dt_length + 1] = { 0 }; - - if (frac_sec > 0) - { - // Append fractional second, which is a 7-digit value - // This way, '1200' becomes '0001200' - char buf[9] = { 0 }; - snprintf(buf, sizeof(buf), ".%07ld", (long int)frac_sec); - // format the datetime into a separate buffer - char datetime_str[max_dt_length + 1] = { 0 }; - strftime(datetime_str, sizeof(datetime_str), "%Y-%m-%dT%H:%M:%S", &datetime); - // now print this buffer into the output buffer - snprintf(output, sizeof(output), "%s%sZ", datetime_str, buf); + return integral_part + suffix; } else { - strftime(output, sizeof(output), "%Y-%m-%dT%H:%M:%SZ", &datetime); + if (fractional_part.empty()) + { + fractional_part += _XPLATSTR('.'); + } + fractional_part = fractional_part.substr(0, 1 + num_decimal_digits); + int padding_length = num_decimal_digits - (static_cast(fractional_part.length()) - 1); + if 
(padding_length > 0) + { + fractional_part += utility::string_t(padding_length, _XPLATSTR('0')); + } + return integral_part + fractional_part + suffix; } + } - return std::string(output); -#endif + utility::string_t str_trim_starting_trailing_whitespaces(const utility::string_t& str) + { + auto non_space_begin = std::find_if(str.begin(), str.end(), std::not1(std::ptr_fun(isspace))); + auto non_space_end = std::find_if(str.rbegin(), str.rend(), std::not1(std::ptr_fun(isspace))).base(); + return utility::string_t(non_space_begin, non_space_end); + } + + void assert_timed_out_by_timer(std::shared_ptr timer_handler) + { + if (timer_handler != nullptr && timer_handler->is_canceled_by_timeout()) + { + throw storage_exception(protocol::error_client_timeout, false); + } } #ifdef _WIN32 @@ -377,7 +356,8 @@ namespace azure { namespace storage { namespace core { public: #ifdef _WIN32 delay_event(std::chrono::milliseconds timeout) - : m_callback(new concurrency::call(std::function(std::bind(&delay_event::timer_fired, this, std::placeholders::_1)))), m_timer(static_cast(timeout.count()), 0, m_callback, false) + : m_callback(new concurrency::call(std::function(std::bind(&delay_event::timer_fired, this, std::placeholders::_1)))), m_timer(static_cast(timeout.count()), 0, m_callback, false), + m_timeout(timeout) { } @@ -388,7 +368,18 @@ namespace azure { namespace storage { namespace core { void start() { - m_timer.start(); + const auto& ambient_delayed_scheduler = get_wastorage_ambient_delayed_scheduler(); + if (ambient_delayed_scheduler) + { + ambient_delayed_scheduler->schedule_after( + [](void* event) { reinterpret_cast(event)->timer_fired(0); }, + this, + m_timeout.count()); + } + else + { + m_timer.start(); + } } #else delay_event(std::chrono::milliseconds timeout) @@ -411,6 +402,7 @@ namespace azure { namespace storage { namespace core { #ifdef _WIN32 concurrency::call* m_callback; concurrency::timer m_timer; + std::chrono::milliseconds m_timeout; #else 
boost::asio::deadline_timer m_timer; #endif @@ -484,10 +476,17 @@ namespace azure { namespace storage { namespace core { { key.append(_XPLATSTR("1#")); } - key.append(utility::conversions::print_string(config.timeout().count())); + key.append(core::convert_to_string(config.timeout().count())); key.append(_XPLATSTR("#")); - key.append(utility::conversions::print_string(config.chunksize())); + key.append(core::convert_to_string(config.chunksize())); key.append(_XPLATSTR("#")); + if (config.get_ssl_context_callback() != nullptr) + { + char buf[16]; + sprintf(buf, "%p", (const void*)&(config.get_ssl_context_callback())); + key.append(buf); + key.append(_XPLATSTR("#")); + } std::lock_guard guard(s_mutex); auto iter = s_http_clients.find(key); @@ -502,6 +501,7 @@ namespace azure { namespace storage { namespace core { return iter->second; } } + #endif }}} // namespace azure::storage::core diff --git a/Microsoft.WindowsAzure.Storage/src/xml_wrapper.cpp b/Microsoft.WindowsAzure.Storage/src/xml_wrapper.cpp new file mode 100644 index 00000000..689f0cd8 --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/src/xml_wrapper.cpp @@ -0,0 +1,314 @@ +/*** +* ==++== +* +* Copyright (c) Microsoft Corporation. All rights reserved. +* Licensed under the Apache License, Version 2.0 (the "License"); +* you may not use this file except in compliance with the License. +* You may obtain a copy of the License at +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. 
+* +* ==--== +* =+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ +* +* xml_wrapper.cpp +* +* This file contains wrapper for libxml2 +* +* =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- +****/ + +#include "stdafx.h" +#include "wascore/xml_wrapper.h" + +#ifndef _WIN32 +namespace azure { namespace storage { namespace core { namespace xml { + +std::string xml_char_to_string(const xmlChar * xml_char) +{ + return std::string(reinterpret_cast(xml_char)); +} + +xml_text_reader_wrapper::xml_text_reader_wrapper(const unsigned char * buffer, unsigned int size) +{ + m_reader = xmlReaderForMemory((const char*)buffer, size, NULL, 0, 0); +} + +xml_text_reader_wrapper::~xml_text_reader_wrapper() +{ + if (m_reader != nullptr) + { + xmlFreeTextReader(m_reader); + m_reader = nullptr; + } +} + +bool xml_text_reader_wrapper::read() +{ + return xmlTextReaderRead(m_reader) == 1; +} + +unsigned xml_text_reader_wrapper::get_node_type() +{ + return xmlTextReaderNodeType(m_reader); +} + +bool xml_text_reader_wrapper::is_empty_element() +{ + return xmlTextReaderIsEmptyElement(m_reader) == 1; +} + +std::string xml_text_reader_wrapper::get_local_name() +{ + auto xml_char = xmlTextReaderLocalName(m_reader); + std::string result; + + if (xml_char != nullptr) + { + result = xml_char_to_string(xml_char); + xmlFree(xml_char); + } + + return result; +} + +std::string xml_text_reader_wrapper::get_value() +{ + auto xml_char = xmlTextReaderValue(m_reader); + std::string result; + + if (xml_char != nullptr) + { + result = xml_char_to_string(xml_char); + xmlFree(xml_char); + } + return result; +} + +bool xml_text_reader_wrapper::move_to_first_attribute() +{ + return xmlTextReaderMoveToFirstAttribute(m_reader) == 1; +} + +bool xml_text_reader_wrapper::move_to_next_attribute() +{ + return xmlTextReaderMoveToNextAttribute(m_reader) == 1; +} + 
+xml_element_wrapper::~xml_element_wrapper() +{ +} + +xml_element_wrapper::xml_element_wrapper(xmlNode * node) +{ + m_ele = node; + m_ele->_private = this; +} + +xml_element_wrapper * xml_element_wrapper::add_child(const std::string & name, const std::string & prefix) +{ + xmlNs* ns = nullptr; + xmlNode* child = nullptr; + + if (m_ele->type != XML_ELEMENT_NODE) + { + return nullptr; + } + + if (!prefix.empty()) + { + ns = xmlSearchNs(m_ele->doc, m_ele, (const xmlChar*)prefix.c_str()); + if (!ns) + { + return nullptr; + } + } + + child = xmlNewNode(ns, (const xmlChar*)name.c_str()); //mem leak? + + if (!child) + return nullptr; + + xmlNode* node = xmlAddChild(m_ele, child); + if (!node) + return nullptr; + + node->_private = new xml_element_wrapper(node); + return reinterpret_cast(node->_private); +} + +void xml_element_wrapper::set_namespace_declaration(const std::string & uri, const std::string & prefix) +{ + xmlNewNs(m_ele, (const xmlChar*)(uri.empty() ? nullptr : uri.c_str()), + (const xmlChar*)(prefix.empty() ? nullptr : prefix.c_str())); +} + +void xml_element_wrapper::set_namespace(const std::string & prefix) +{ + xmlNs* ns = xmlSearchNs(m_ele->doc, m_ele, (const xmlChar*)(prefix.empty() ? 
nullptr : prefix.c_str())); + if (ns) + { + xmlSetNs(m_ele, ns); + } +} + +void xml_element_wrapper::set_attribute(const std::string & name, const std::string & value, const std::string & prefix) +{ + xmlAttr* attr = 0; + + if (prefix.empty()) + { + attr = xmlSetProp(m_ele, (const xmlChar*)name.c_str(), (const xmlChar*)value.c_str()); + } + else + { + xmlNs* ns = xmlSearchNs(m_ele->doc, m_ele, (const xmlChar*)prefix.c_str()); + if (ns) + { + attr = xmlSetNsProp(m_ele, ns, (const xmlChar*)name.c_str(), + (const xmlChar*)value.c_str()); + } + else + { + return; + } + } + + if (attr) + { + attr->_private = new xml_element_wrapper(reinterpret_cast(attr)); + } +} + +void xml_element_wrapper::set_child_text(const std::string & text) +{ + xml_element_wrapper* node = nullptr; + + for (xmlNode* child = m_ele->children; child; child = child->next) + if (child->type == xmlElementType::XML_TEXT_NODE) + { + child->_private = new xml_element_wrapper(child); + node = reinterpret_cast(child->_private); + } + + if (node) + { + if (node->m_ele->type != xmlElementType::XML_ELEMENT_NODE) + { + xmlNodeSetContent(node->m_ele, (const xmlChar*)text.c_str()); + } + } + else { + if (m_ele->type == XML_ELEMENT_NODE) + { + xmlNode* node = xmlNewText((const xmlChar*)text.c_str()); + + node = xmlAddChild(m_ele, node); + + node->_private = new xml_element_wrapper(node); + } + } +} + +void xml_element_wrapper::free_wrappers(xmlNode * node) +{ + if (!node) + return; + + for (xmlNode* child = node->children; child; child = child->next) + free_wrappers(child); + + switch (node->type) + { + case XML_DTD_NODE: + case XML_ELEMENT_DECL: + case XML_ATTRIBUTE_NODE: + case XML_ATTRIBUTE_DECL: + case XML_ENTITY_DECL: + if (node->_private) + { + delete reinterpret_cast(node->_private); + node->_private = nullptr; + } + break; + case XML_DOCUMENT_NODE: + break; + default: + if (node->_private) + { + delete reinterpret_cast(node->_private); + node->_private = nullptr; + } + break; + } +} + 
+xml_document_wrapper::xml_document_wrapper() +{ + m_doc = xmlNewDoc(reinterpret_cast("1.0")); +} + +xml_document_wrapper::~xml_document_wrapper() +{ + xml_element_wrapper::free_wrappers(reinterpret_cast(m_doc)); + xmlFreeDoc(m_doc); + m_doc = nullptr; +} + +std::string xml_document_wrapper::write_to_string() +{ + xmlIndentTreeOutput = 0; + xmlChar* buffer = 0; + int size = 0; + + xmlDocDumpFormatMemoryEnc(m_doc, &buffer, &size, 0, 0); + + std::string result; + + if (buffer) + { + result = std::string(reinterpret_cast(buffer), reinterpret_cast(buffer + size)); + + xmlFree(buffer); + } + return result; +} + +xml_element_wrapper* xml_document_wrapper::create_root_node(const std::string & name, const std::string & namespace_name, const std::string & prefix) +{ + xmlNode* node = xmlNewDocNode(m_doc, 0, (const xmlChar*)name.c_str(), 0); + xmlDocSetRootElement(m_doc, node); + + xml_element_wrapper* element = get_root_node(); + + if (!namespace_name.empty()) + { + element->set_namespace_declaration(namespace_name, prefix); + element->set_namespace(prefix); + } + + return element; +} + +xml_element_wrapper* xml_document_wrapper::get_root_node() const +{ + xmlNode* root = xmlDocGetRootElement(m_doc); + if (root == NULL) + return NULL; + else + { + root->_private = new xml_element_wrapper(root); + return reinterpret_cast(root->_private); + } + + return nullptr; +} + +}}}} // namespace azure::storage::core::xml + +#endif //#ifdef _WIN32 diff --git a/Microsoft.WindowsAzure.Storage/src/xmlhelpers.cpp b/Microsoft.WindowsAzure.Storage/src/xmlhelpers.cpp index 1d6b43e6..bcb2488e 100644 --- a/Microsoft.WindowsAzure.Storage/src/xmlhelpers.cpp +++ b/Microsoft.WindowsAzure.Storage/src/xmlhelpers.cpp @@ -46,9 +46,10 @@ #include "wascore/xmlstream.h" #else typedef int XmlNodeType; -#define XmlNodeType_Element xmlpp::TextReader::xmlNodeType::Element -#define XmlNodeType_Text xmlpp::TextReader::xmlNodeType::Text -#define XmlNodeType_EndElement xmlpp::TextReader::xmlNodeType::EndElement 
+#define XmlNodeType_Element xmlElementType::XML_ELEMENT_NODE +#define XmlNodeType_Text xmlElementType::XML_TEXT_NODE +#define XmlNodeType_EndElement xmlElementType::XML_ELEMENT_DECL +#define XmlNodeType_Whitespace xmlElementType::XML_DTD_NODE //? not align with tinyxml? #endif using namespace web; @@ -101,13 +102,13 @@ namespace azure { namespace storage { namespace core { namespace xml { if (m_data.empty()) m_reader.reset(); else - m_reader.reset(new xmlpp::TextReader(reinterpret_cast(m_data.data()), static_cast(m_data.size()))); + m_reader.reset(new xml_text_reader_wrapper(reinterpret_cast(m_data.data()), static_cast(m_data.size()))); #endif } - bool xml_reader::parse() + xml_reader::parse_result xml_reader::parse() { - if (m_streamDone) return false; + if (m_streamDone) return xml_reader::parse_result::cannot_continue; // Set this to true each time the parse routine is invoked. Most derived readers will only invoke parse once. m_continueParsing = true; @@ -119,7 +120,7 @@ namespace azure { namespace storage { namespace core { namespace xml { { #else if (m_reader == nullptr) - return !m_continueParsing; // no XML document to read + return xml_reader::parse_result::cannot_continue; // no XML document to read while (m_continueParsing && m_reader->read()) { @@ -147,7 +148,10 @@ namespace azure { namespace storage { namespace core { namespace xml { break; case XmlNodeType_Text: - handle_element(m_elementStack.back()); + case XmlNodeType_Whitespace: + if (m_elementStack.size()) { + handle_element(m_elementStack.back()); + } break; case XmlNodeType_EndElement: @@ -160,12 +164,24 @@ namespace azure { namespace storage { namespace core { namespace xml { } } + xml_reader::parse_result result = xml_reader::parse_result::can_continue; // If the loop was terminated because there was no more to read from the stream, set m_streamDone to true, so exit early // the next time parse is invoked. 
- if (m_continueParsing) m_streamDone = true; - // Return false if the end of the stream was reached and true if parsing was paused. The return value indicates whether - // parsing can be resumed. - return !m_continueParsing; + // if stream is not done, it means that the parsing is interrupted by pause(). + // if the element stack is not empty when the stream is done, it means that the xml is not complete. + if (m_continueParsing) + { + m_streamDone = true; + if (m_elementStack.empty()) + { + result = xml_reader::parse_result::cannot_continue; + } + else + { + result = xml_reader::parse_result::xml_not_complete; + } + } + return result; } utility::string_t xml_reader::get_parent_element_name(size_t pos) @@ -199,7 +215,7 @@ namespace azure { namespace storage { namespace core { namespace xml { } return utility::string_t(pwszLocalName); #else - return utility::string_t(m_reader->get_local_name().raw()); + return utility::string_t(m_reader->get_local_name()); #endif } @@ -236,7 +252,7 @@ namespace azure { namespace storage { namespace core { namespace xml { return utility::string_t(pwszValue); #else - return utility::string_t(m_reader->get_value().raw()); + return utility::string_t(m_reader->get_value()); #endif } @@ -314,8 +330,8 @@ namespace azure { namespace storage { namespace core { namespace xml { throw utility::details::create_system_error(error); } #else // LINUX - m_document.reset(new xmlpp::Document()); - m_elementStack = std::stack(); + m_document.reset(new xml_document_wrapper()); + m_elementStack = std::stack(); m_stream = &stream; #endif } @@ -339,7 +355,8 @@ namespace azure { namespace storage { namespace core { namespace xml { } #else // LINUX auto result = m_document->write_to_string(); - *m_stream << reinterpret_cast(result.c_str()); + if (m_stream != nullptr) + *m_stream << reinterpret_cast(result.c_str()); #endif } diff --git a/Microsoft.WindowsAzure.Storage/tests/CMakeLists.txt b/Microsoft.WindowsAzure.Storage/tests/CMakeLists.txt index
3b12bd44..c223dc4c 100644 --- a/Microsoft.WindowsAzure.Storage/tests/CMakeLists.txt +++ b/Microsoft.WindowsAzure.Storage/tests/CMakeLists.txt @@ -3,6 +3,7 @@ include_directories(../includes ${AZURESTORAGE_INCLUDE_DIRS} ${UnitTest++_INCLUD # THE ORDER OF FILES IS VERY /VERY/ IMPORTANT if(UNIX) set(SOURCES + timer_handler_test.cpp blob_lease_test.cpp blob_streams_test.cpp blob_test_base.cpp diff --git a/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v140.vcxproj b/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v140.vcxproj index f879ebfc..bad2f9ae 100644 --- a/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v140.vcxproj +++ b/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v140.vcxproj @@ -5,10 +5,18 @@ Debug Win32 + + Debug + x64 + Release Win32 + + Release + x64 + {BC8759CC-C115-4E27-9545-D25E2CDA9412} @@ -22,6 +30,12 @@ v140 Unicode + + Application + true + v140 + Unicode + Application false @@ -29,15 +43,28 @@ true Unicode + + Application + false + v140 + true + Unicode + + + + + + + true @@ -45,12 +72,24 @@ $(PlatformToolset)\$(Platform)\$(Configuration)\ wastoretest + + true + $(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ + $(PlatformToolset)\$(Platform)\$(Configuration)\ + wastoretest + false $(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ $(PlatformToolset)\$(Platform)\$(Configuration)\ wastoretest + + false + $(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ + $(PlatformToolset)\$(Platform)\$(Configuration)\ + wastoretest + Use @@ -59,7 +98,24 @@ false WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) true - ..\includes;..\tests\UnitTest++\src;%(AdditionalIncludeDirectories) + ..\includes;$(VcpkgCurrentInstalledDir)\include\UnitTest++;%(AdditionalIncludeDirectories) + true + + + Console + true + 
bcrypt.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies) + + + + + Use + Level4 + Disabled + false + WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) + true + ..\includes;$(VcpkgCurrentInstalledDir)\include\UnitTest++;%(AdditionalIncludeDirectories) true @@ -77,7 +133,27 @@ true WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) true - ..\includes;..\tests\UnitTest++\src;%(AdditionalIncludeDirectories) + ..\includes;$(VcpkgCurrentInstalledDir)\include\UnitTest++;%(AdditionalIncludeDirectories) + true + + + Console + true + true + true + bcrypt.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies) + + + + + Level3 + Use + MaxSpeed + true + true + WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) + true + ..\includes;$(VcpkgCurrentInstalledDir)\include\UnitTest++;%(AdditionalIncludeDirectories) true @@ -103,6 +179,7 @@ + @@ -129,7 +206,9 @@ Create + Create Create + Create @@ -145,12 +224,8 @@ true false - - {64a4fefe-0461-4e95-8cc1-91ef5f57dbc6} - - @@ -161,12 +236,5 @@ - - - - This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. 
- - - diff --git a/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v140.vcxproj.filters b/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v140.vcxproj.filters index 76c7247a..29d86d30 100644 --- a/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v140.vcxproj.filters +++ b/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v140.vcxproj.filters @@ -140,6 +140,9 @@ Source Files + + Source Files + diff --git a/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v120.vcxproj b/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v141.vcxproj similarity index 59% rename from Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v120.vcxproj rename to Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v141.vcxproj index 6cf3c7f9..a119ea3e 100644 --- a/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v120.vcxproj +++ b/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v141.vcxproj @@ -1,17 +1,25 @@  - + Debug Win32 + + Debug + x64 + Release Win32 + + Release + x64 + - {D02FBA48-23CD-4CDF-9BD0-A03295744FA2} + {BC8759CC-C115-4E27-9545-D25E2CDA9412} Win32Proj MicrosoftWindowsAzureStorageUnitTests @@ -19,13 +27,26 @@ Application true - v120 + v141 + Unicode + + + Application + true + v141 Unicode Application false - v120 + v141 + true + Unicode + + + Application + false + v141 true Unicode @@ -35,9 +56,15 @@ + + + + + + true @@ -45,12 +72,24 @@ $(PlatformToolset)\$(Platform)\$(Configuration)\ wastoretest + + true + $(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ + $(PlatformToolset)\$(Platform)\$(Configuration)\ + wastoretest + false $(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ $(PlatformToolset)\$(Platform)\$(Configuration)\ wastoretest + + false + 
$(ProjectDir)..\$(PlatformToolset)\$(Platform)\$(Configuration)\ + $(PlatformToolset)\$(Platform)\$(Configuration)\ + wastoretest + Use @@ -59,7 +98,24 @@ false WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) true - ..\includes;..\tests\UnitTest++\src;%(AdditionalIncludeDirectories) + ..\includes;$(VcpkgCurrentInstalledDir)\include\UnitTest++;%(AdditionalIncludeDirectories) + true + + + Console + true + bcrypt.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies) + + + + + Use + Level4 + Disabled + false + WIN32;_DEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) + true + ..\includes;$(VcpkgCurrentInstalledDir)\include\UnitTest++;%(AdditionalIncludeDirectories) true @@ -77,7 +133,27 @@ true WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) true - ..\includes;..\tests\UnitTest++\src;%(AdditionalIncludeDirectories) + ..\includes;$(VcpkgCurrentInstalledDir)\include\UnitTest++;%(AdditionalIncludeDirectories) + true + + + Console + true + true + true + bcrypt.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies) + + + + + Level3 + Use + MaxSpeed + true + true + WIN32;NDEBUG;_CONSOLE;_TURN_OFF_PLATFORM_STRING;%(PreprocessorDefinitions) + true + ..\includes;$(VcpkgCurrentInstalledDir)\include\UnitTest++;%(AdditionalIncludeDirectories) true @@ -103,6 +179,7 @@ + @@ -129,7 +206,9 @@ Create + Create Create + Create @@ -137,20 +216,16 @@ - - {DCFF75B0-B142-4EC8-992F-3E48F2E3EECE} + + {25D342C3-6CDA-44DD-A16A-32A19B692785} true true false true false - - {64a4fefe-0461-4e95-8cc1-91ef5f57dbc6} - - @@ -161,12 +236,5 @@ - - - - This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them. 
For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}. - - - diff --git a/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v120.vcxproj.filters b/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v141.vcxproj.filters similarity index 98% rename from Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v120.vcxproj.filters rename to Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v141.vcxproj.filters index 26bd3b1f..29d86d30 100644 --- a/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v120.vcxproj.filters +++ b/Microsoft.WindowsAzure.Storage/tests/Microsoft.WindowsAzure.Storage.UnitTests.v141.vcxproj.filters @@ -131,15 +131,18 @@ Source Files - + Source Files - + Source Files Source Files + + Source Files + diff --git a/Microsoft.WindowsAzure.Storage/tests/README.md b/Microsoft.WindowsAzure.Storage/tests/README.md index 3233091d..ad82528e 100644 --- a/Microsoft.WindowsAzure.Storage/tests/README.md +++ b/Microsoft.WindowsAzure.Storage/tests/README.md @@ -1,7 +1,11 @@ # Unit Tests for Azure Storage Client Library for C++ ## Prerequisites -Please download [UnitTest++](https://github.com/unittest-cpp/unittest-cpp/tree/sourceforge) and place it into a subfolder named UnitTest++ under this folder. Then add both UnitTest++ project(UnitTest++.vsnet2005.vcproj) and the Microsoft.WindowsAzure.Storage.UnitTests project to the solution to get unit tests working. +Run the following commands from the root of this repository to get UnitTest++.
+```bash +git submodule init +git submodule update +``` ## Running the tests diff --git a/Microsoft.WindowsAzure.Storage/tests/blob_lease_test.cpp b/Microsoft.WindowsAzure.Storage/tests/blob_lease_test.cpp index cfcbf7b5..768b5a93 100644 --- a/Microsoft.WindowsAzure.Storage/tests/blob_lease_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/blob_lease_test.cpp @@ -69,27 +69,29 @@ void container_test_base::check_lease_access(azure::storage::cloud_blob_containe if (locked) { CHECK_THROW(container.delete_container(empty_condition, azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); - } - else - { - if (allow_delete) + + if (fake) { - container.delete_container(empty_condition, azure::storage::blob_request_options(), m_context); + CHECK_THROW(container.delete_container(lease_condition, azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); + CHECK_THROW(container.download_attributes(lease_condition, azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); } - } - - if (locked && !fake) - { - container.download_attributes(lease_condition, azure::storage::blob_request_options(), m_context); - if (allow_delete) + else { - container.delete_container(lease_condition, azure::storage::blob_request_options(), m_context); + container.download_attributes(lease_condition, azure::storage::blob_request_options(), m_context); + if (allow_delete) + { + container.delete_container(lease_condition, azure::storage::blob_request_options(), m_context); + } } } else { CHECK_THROW(container.delete_container(lease_condition, azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); CHECK_THROW(container.download_attributes(lease_condition, azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); + if (allow_delete) + { + container.delete_container(empty_condition, azure::storage::blob_request_options(), m_context); + } } } diff --git 
a/Microsoft.WindowsAzure.Storage/tests/blob_streams_test.cpp b/Microsoft.WindowsAzure.Storage/tests/blob_streams_test.cpp index 56c5c370..c92a931e 100644 --- a/Microsoft.WindowsAzure.Storage/tests/blob_streams_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/blob_streams_test.cpp @@ -18,6 +18,7 @@ #include "stdafx.h" #include "blob_test_base.h" #include "check_macros.h" +#include "wascore/hashing.h" size_t seek_read_and_compare(concurrency::streams::istream stream, std::vector buffer_to_compare, utility::size64_t offset, size_t count, size_t expected_read_count) { @@ -178,6 +179,54 @@ void seek_and_write_putn(concurrency::streams::ostream stream, const std::vector SUITE(Blob) { + TEST_FIXTURE(block_blob_test_base, blob_write_stream_upload_and_download) + { + auto provider1 = azure::storage::core::hash_provider::create_md5_hash_provider(); + auto provider2 = azure::storage::core::hash_provider::create_crc64_hash_provider(); + + std::vector buffer; + size_t buffersize = 3 * 1024 * 1024; + buffer.resize(buffersize); + + auto wstream = m_blob.open_write(); + + // write 3 * 3MB to the blob stream + for (int i = 0; i < 3; ++i) + { + fill_buffer(buffer); + provider1.write(buffer.data(), buffersize); + provider2.write(buffer.data(), buffersize); + concurrency::streams::container_buffer> input_buffer(buffer); + wstream.write(input_buffer, buffersize); + } + + provider1.close(); + provider2.close(); + CHECK(provider1.hash().is_md5()); + auto origin_md5 = provider1.hash().md5(); + CHECK(provider2.hash().is_crc64()); + auto origin_crc64 = provider2.hash().crc64(); + + wstream.flush().wait(); + wstream.close().wait(); + + azure::storage::blob_request_options options; + concurrency::streams::container_buffer> output_buffer; + m_blob.download_to_stream(output_buffer.create_ostream(), azure::storage::access_condition(), options, m_context); + + provider1 = azure::storage::core::hash_provider::create_md5_hash_provider(); + provider1.write(output_buffer.collection().data(), 
(size_t)(output_buffer.size())); + provider1.close(); + provider2 = azure::storage::core::hash_provider::create_crc64_hash_provider(); + provider2.write(output_buffer.collection().data(), (size_t)(output_buffer.size())); + provider2.close(); + + auto downloaded_md5 = provider1.hash().md5(); + CHECK_UTF8_EQUAL(origin_md5, downloaded_md5); + auto downloaded_crc64 = provider2.hash().crc64(); + CHECK_UTF8_EQUAL(origin_crc64, downloaded_crc64); + } + TEST_FIXTURE(block_blob_test_base, blob_read_stream_download) { azure::storage::blob_request_options options; @@ -186,7 +235,7 @@ SUITE(Blob) std::vector buffer; buffer.resize(3 * 1024 * 1024); - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer), azure::storage::access_condition(), options, m_context); concurrency::streams::container_buffer> output_buffer; @@ -206,7 +255,7 @@ SUITE(Blob) std::vector buffer; buffer.resize(2 * 1024 * 1024); - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer), azure::storage::access_condition(), options, m_context); auto stream = m_blob.open_read(azure::storage::access_condition(), options, m_context); @@ -228,7 +277,7 @@ SUITE(Blob) std::vector buffer; buffer.resize(3 * 1024 * 1024); - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer), azure::storage::access_condition(), options, m_context); auto stream = m_blob.open_read(azure::storage::access_condition(), options, m_context); @@ -296,7 +345,7 @@ SUITE(Blob) const size_t buffer_size = 16 * 1024; std::vector input_buffer; input_buffer.resize(buffer_size); - fill_buffer_and_get_md5(input_buffer); + fill_buffer(input_buffer); m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(input_buffer), azure::storage::access_condition(), options, m_context); auto blob_stream = 
m_blob.open_read(azure::storage::access_condition(), options, m_context); @@ -311,7 +360,7 @@ SUITE(Blob) const size_t buffer_size = 64 * 1024; std::vector input_buffer; input_buffer.resize(buffer_size); - fill_buffer_and_get_md5(input_buffer); + fill_buffer(input_buffer); m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(input_buffer), azure::storage::access_condition(), options, m_context); auto blob_stream = m_blob.open_read(azure::storage::access_condition(), options, m_context); @@ -353,7 +402,7 @@ SUITE(Blob) size_t attempts = 2; buffer.resize(1024); - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); stream.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); std::copy(buffer.begin(), buffer.end(), final_blob_contents.begin()); @@ -361,12 +410,12 @@ SUITE(Blob) stream.seek(5 * 1024); attempts++; - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); stream.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); std::copy(buffer.begin(), buffer.end(), final_blob_contents.begin() + 5 * 1024); - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); stream.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); std::copy(buffer.begin(), buffer.end(), final_blob_contents.begin() + 6 * 1024); @@ -374,7 +423,7 @@ SUITE(Blob) stream.seek(512); attempts++; - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); stream.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); std::copy(buffer.begin(), buffer.end(), final_blob_contents.begin() + 512); @@ -399,7 +448,7 @@ SUITE(Blob) std::vector buffer; buffer.resize(16 * 1024); - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer), 0, azure::storage::access_condition(), options, m_context); CHECK_THROW(m_blob.open_write(azure::storage::access_condition(), options, m_context), std::logic_error); @@ -433,7 +482,7 @@ SUITE(Blob) 
missing_blob.open_write(azure::storage::access_condition::generate_if_none_match_condition(m_blob.properties().etag()), azure::storage::blob_request_options(), m_context).close().wait(); missing_blob = m_container.get_block_blob_reference(_XPLATSTR("missing_blob3")); - missing_blob.open_write(azure::storage::access_condition::generate_if_none_match_condition(_XPLATSTR("*")), azure::storage::blob_request_options(), m_context).close().wait(); + CHECK_THROW(missing_blob.open_write(azure::storage::access_condition::generate_if_none_match_condition(_XPLATSTR("*")), azure::storage::blob_request_options(), m_context).close().wait(), azure::storage::storage_exception); missing_blob = m_container.get_block_blob_reference(_XPLATSTR("missing_blob4")); missing_blob.open_write(azure::storage::access_condition::generate_if_modified_since_condition(m_blob.properties().last_modified() + utility::datetime::from_minutes(1)), azure::storage::blob_request_options(), m_context).close().wait(); @@ -448,10 +497,7 @@ SUITE(Blob) m_blob.open_write(azure::storage::access_condition::generate_if_none_match_condition(missing_blob.properties().etag()), azure::storage::blob_request_options(), m_context).close().wait(); CHECK_THROW(m_blob.open_write(azure::storage::access_condition::generate_if_none_match_condition(m_blob.properties().etag()), azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); - - auto stream = m_blob.open_write(azure::storage::access_condition::generate_if_none_match_condition(_XPLATSTR("*")), azure::storage::blob_request_options(), m_context); - CHECK_THROW(stream.close().wait(), azure::storage::storage_exception); - + m_blob.open_write(azure::storage::access_condition::generate_if_modified_since_condition(m_blob.properties().last_modified() - utility::datetime::from_minutes(1)), azure::storage::blob_request_options(), m_context).close().wait(); 
CHECK_THROW(m_blob.open_write(azure::storage::access_condition::generate_if_modified_since_condition(m_blob.properties().last_modified() + utility::datetime::from_minutes(1)), azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); @@ -460,15 +506,10 @@ SUITE(Blob) CHECK_THROW(m_blob.open_write(azure::storage::access_condition::generate_if_not_modified_since_condition(m_blob.properties().last_modified() - utility::datetime::from_minutes(1)), azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); - stream = m_blob.open_write(azure::storage::access_condition::generate_if_match_condition(m_blob.properties().etag()), azure::storage::blob_request_options(), m_context); + auto stream = m_blob.open_write(azure::storage::access_condition::generate_if_match_condition(m_blob.properties().etag()), azure::storage::blob_request_options(), m_context); m_blob.upload_properties(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); CHECK_THROW(stream.close().wait(), azure::storage::storage_exception); - - missing_blob = m_container.get_block_blob_reference(_XPLATSTR("missing_blob6")); - stream = missing_blob.open_write(azure::storage::access_condition::generate_if_none_match_condition(_XPLATSTR("*")), azure::storage::blob_request_options(), m_context); - missing_blob.upload_block_list(std::vector(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); - CHECK_THROW(stream.close().wait(), azure::storage::storage_exception); - + stream = m_blob.open_write(azure::storage::access_condition::generate_if_not_modified_since_condition(m_blob.properties().last_modified()), azure::storage::blob_request_options(), m_context); std::this_thread::sleep_for(std::chrono::seconds(1)); m_blob.upload_properties(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); @@ -482,7 +523,7 @@ SUITE(Blob) const size_t buffer_size = 16 * 1024; 
std::vector input_buffer; input_buffer.resize(buffer_size); - fill_buffer_and_get_md5(input_buffer); + fill_buffer(input_buffer); m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(input_buffer), 0, azure::storage::access_condition(), options, m_context); CHECK_EQUAL(buffer_size, m_blob.properties().size()); @@ -498,7 +539,7 @@ SUITE(Blob) const size_t buffer_size = 64 * 1024; std::vector input_buffer; input_buffer.resize(buffer_size); - fill_buffer_and_get_md5(input_buffer); + fill_buffer(input_buffer); m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(input_buffer), 0, azure::storage::access_condition(), options, m_context); CHECK_EQUAL(buffer_size, m_blob.properties().size()); @@ -514,7 +555,7 @@ SUITE(Blob) const size_t buffer_size = 16 * 1024; std::vector original_buffer; original_buffer.resize(buffer_size); - fill_buffer_and_get_md5(original_buffer); + fill_buffer(original_buffer); m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(original_buffer), 0, azure::storage::access_condition(), options, m_context); CHECK_EQUAL(buffer_size, m_blob.properties().size()); @@ -530,7 +571,7 @@ SUITE(Blob) const size_t buffer_size = 64 * 1024; std::vector original_buffer; original_buffer.resize(buffer_size); - fill_buffer_and_get_md5(original_buffer); + fill_buffer(original_buffer); m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(original_buffer), 0, azure::storage::access_condition(), options, m_context); CHECK_EQUAL(buffer_size, m_blob.properties().size()); diff --git a/Microsoft.WindowsAzure.Storage/tests/blob_test_base.cpp b/Microsoft.WindowsAzure.Storage/tests/blob_test_base.cpp index b68e7463..38b7423c 100644 --- a/Microsoft.WindowsAzure.Storage/tests/blob_test_base.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/blob_test_base.cpp @@ -20,6 +20,8 @@ #include "check_macros.h" #include "wascore/streams.h" +#include "wascore/util.h" + utility::string_t 
blob_service_test_base::fill_buffer_and_get_md5(std::vector& buffer) { @@ -28,15 +30,27 @@ utility::string_t blob_service_test_base::fill_buffer_and_get_md5(std::vector& buffer, size_t offset, size_t count) { - std::generate_n(buffer.begin(), buffer.size(), [] () -> uint8_t - { - return (uint8_t)(std::rand() % (int)UINT8_MAX); - }); + fill_buffer(buffer, offset, count); azure::storage::core::hash_provider provider = azure::storage::core::hash_provider::create_md5_hash_provider(); provider.write(buffer.data() + offset, count); provider.close(); - return provider.hash(); + return provider.hash().md5(); +} + +utility::string_t blob_service_test_base::fill_buffer_and_get_crc64(std::vector& buffer) +{ + return fill_buffer_and_get_crc64(buffer, 0, buffer.size()); +} + +utility::string_t blob_service_test_base::fill_buffer_and_get_crc64(std::vector& buffer, size_t offset, size_t count) +{ + fill_buffer(buffer, offset, count); + + azure::storage::core::hash_provider provider = azure::storage::core::hash_provider::create_crc64_hash_provider(); + provider.write(buffer.data() + offset, count); + provider.close(); + return provider.hash().crc64(); } utility::string_t blob_service_test_base::get_random_container_name(size_t length) @@ -45,11 +59,11 @@ utility::string_t blob_service_test_base::get_random_container_name(size_t lengt name.resize(length); std::generate_n(name.begin(), length, [] () -> utility::char_t { - const utility::char_t possible_chars[] = { _XPLATSTR("abcdefghijklmnopqrstuvwxyz1234567890") }; - return possible_chars[std::rand() % (sizeof(possible_chars) / sizeof(utility::char_t) - 1)]; + const utility::string_t possible_chars = _XPLATSTR("abcdefghijklmnopqrstuvwxyz1234567890"); + return possible_chars[get_random_int32() % possible_chars.length()]; }); - return utility::conversions::print_string(utility::datetime::utc_now().to_interval()) + name; + return azure::storage::core::convert_to_string(utility::datetime::utc_now().to_interval()) + name; } void 
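The updated `get_random_container_name` above still selects each character with `get_random_int32() % possible_chars.length()`, which carries a slight modulo bias whenever the character count does not divide the generator's range. A self-contained sketch of the same helper using `<random>`'s `uniform_int_distribution`, which avoids that bias (the free-function name is illustrative, not part of the library):

```cpp
#include <cstddef>
#include <random>
#include <string>

// Sketch of a bias-free random container-name generator: every character in
// possible_chars is drawn with exactly equal probability.
std::string random_container_name(std::size_t length = 10)
{
    static const std::string possible_chars = "abcdefghijklmnopqrstuvwxyz1234567890";
    static std::mt19937 gen{std::random_device{}()};
    std::uniform_int_distribution<std::size_t> dist(0, possible_chars.size() - 1);

    std::string name(length, '\0');
    for (auto& c : name)
    {
        c = possible_chars[dist(gen)];
    }
    return name;
}
```

For test fixtures the bias is harmless in practice; the distribution-based form mainly documents intent and drops the raw modulo.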
test_base::check_parallelism(const azure::storage::operation_context& context, int expected_parallelism) @@ -88,8 +102,15 @@ void test_base::check_parallelism(const azure::storage::operation_context& conte } } - // TODO: Investigate why this is only 5 instead of 6 - CHECK_EQUAL(expected_parallelism, max_count); + // Sometimes when block size is small and the parallelism is relatively large, former requests might finish before later requests start. + if (expected_parallelism <= 2) + { + CHECK_EQUAL(expected_parallelism, max_count); + } + else + { + CHECK(max_count >= 2); + } } web::http::uri blob_service_test_base::defiddler(const web::http::uri& uri) @@ -114,7 +135,7 @@ void blob_service_test_base::check_blob_equal(const azure::storage::cloud_blob& CHECK_UTF8_EQUAL(expected.snapshot_qualified_uri().primary_uri().to_string(), actual.snapshot_qualified_uri().primary_uri().to_string()); CHECK_UTF8_EQUAL(expected.snapshot_qualified_uri().secondary_uri().to_string(), actual.snapshot_qualified_uri().secondary_uri().to_string()); check_blob_copy_state_equal(expected.copy_state(), actual.copy_state()); - check_blob_properties_equal(expected.properties(), actual.properties()); + check_blob_properties_equal(expected.properties(), actual.properties(), true); } void blob_service_test_base::check_blob_copy_state_equal(const azure::storage::copy_state& expected, const azure::storage::copy_state& actual) @@ -128,7 +149,7 @@ void blob_service_test_base::check_blob_copy_state_equal(const azure::storage::c CHECK_UTF8_EQUAL(expected.source().to_string(), actual.source().to_string()); } -void blob_service_test_base::check_blob_properties_equal(const azure::storage::cloud_blob_properties& expected, const azure::storage::cloud_blob_properties& actual) +void blob_service_test_base::check_blob_properties_equal(const azure::storage::cloud_blob_properties& expected, const azure::storage::cloud_blob_properties& actual, bool check_settable_only) { CHECK_UTF8_EQUAL(expected.etag(), 
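The relaxed `check_parallelism` above infers the peak number of in-flight requests from the operation context's request timestamps. The underlying computation, maximum interval overlap via an event sweep, can be sketched independently of the test harness (hypothetical helper; half-open `[start, end)` intervals, so back-to-back requests do not count as concurrent):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Peak concurrency over a set of [start, end) intervals: turn every interval
// into an "open" (+1) and "close" (-1) event, sweep in time order, and track
// the running count. At equal timestamps, closes sort before opens.
int max_parallelism(std::vector<std::pair<int, int>> intervals)
{
    std::vector<std::pair<int, int>> events; // (time, +1 open / -1 close)
    for (const auto& iv : intervals)
    {
        events.emplace_back(iv.first, +1);
        events.emplace_back(iv.second, -1);
    }
    std::sort(events.begin(), events.end());

    int open = 0, peak = 0;
    for (const auto& e : events)
    {
        open += e.second;
        peak = std::max(peak, open);
    }
    return peak;
}
```

This also explains the relaxation in the diff: with small blocks, an early request's close event can land before a later request's open event, so the sweep's peak may legitimately fall below the configured parallelism.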
actual.etag()); CHECK(expected.last_modified() == actual.last_modified()); @@ -138,5 +159,8 @@ void blob_service_test_base::check_blob_properties_equal(const azure::storage::c CHECK_UTF8_EQUAL(expected.content_language(), actual.content_language()); CHECK_UTF8_EQUAL(expected.content_md5(), actual.content_md5()); CHECK_UTF8_EQUAL(expected.content_type(), actual.content_type()); - CHECK(expected.server_encrypted() == actual.server_encrypted()); + if (!check_settable_only) + { + CHECK(expected.server_encrypted() == actual.server_encrypted()); + } } diff --git a/Microsoft.WindowsAzure.Storage/tests/blob_test_base.h b/Microsoft.WindowsAzure.Storage/tests/blob_test_base.h index ea92ee37..815c5716 100644 --- a/Microsoft.WindowsAzure.Storage/tests/blob_test_base.h +++ b/Microsoft.WindowsAzure.Storage/tests/blob_test_base.h @@ -25,14 +25,18 @@ #include "test_base.h" #include "was/blob.h" -const utility::string_t dummy_md5(_XPLATSTR("MDAwMDAwMDA=")); +const utility::string_t dummy_md5(_XPLATSTR("MDAwMDAwMDAwMDAwMDAwMA==")); +const uint64_t dummy_crc64_val(0x9588C743); +const utility::string_t dummy_crc64(_XPLATSTR("Q8eIlQAAAAA=")); class blob_service_test_base : public test_base { public: blob_service_test_base() - : m_client(test_config::instance().account().create_cloud_blob_client()) + : m_client(test_config::instance().account().create_cloud_blob_client()), + m_premium_client(test_config::instance().premium_account().create_cloud_blob_client()), + m_blob_storage_client(test_config::instance().blob_storage_account().create_cloud_blob_client()) { } @@ -44,16 +48,20 @@ class blob_service_test_base : public test_base static web::http::uri defiddler(const web::http::uri& uri); static utility::string_t fill_buffer_and_get_md5(std::vector& buffer); + static utility::string_t fill_buffer_and_get_crc64(std::vector& buffer); static utility::string_t fill_buffer_and_get_md5(std::vector& buffer, size_t offset, size_t count); + static utility::string_t 
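The two `dummy_crc64` constants introduced above are mutually consistent: the string form is the base64 encoding of the value's 8-byte little-endian representation, which is how the CRC64 travels on the wire. A self-contained check (plain base64 encoder; no library code assumed):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Minimal base64 encoder, enough to round-trip an 8-byte value.
std::string base64(const std::vector<std::uint8_t>& data)
{
    static const char tbl[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    std::size_t i = 0;
    for (; i + 3 <= data.size(); i += 3) // full 3-byte groups -> 4 chars
    {
        std::uint32_t n = (data[i] << 16) | (data[i + 1] << 8) | data[i + 2];
        out += tbl[(n >> 18) & 63]; out += tbl[(n >> 12) & 63];
        out += tbl[(n >> 6) & 63];  out += tbl[n & 63];
    }
    if (i + 2 == data.size()) // 2 trailing bytes -> 3 chars + '='
    {
        std::uint32_t n = (data[i] << 16) | (data[i + 1] << 8);
        out += tbl[(n >> 18) & 63]; out += tbl[(n >> 12) & 63];
        out += tbl[(n >> 6) & 63];  out += '=';
    }
    else if (i + 1 == data.size()) // 1 trailing byte -> 2 chars + "=="
    {
        std::uint32_t n = data[i] << 16;
        out += tbl[(n >> 18) & 63]; out += tbl[(n >> 12) & 63];
        out += "==";
    }
    return out;
}

// Serialize the CRC64 value least-significant byte first, then base64 it.
std::string crc64_to_base64(std::uint64_t crc)
{
    std::vector<std::uint8_t> bytes(8);
    for (int b = 0; b < 8; ++b)
    {
        bytes[b] = static_cast<std::uint8_t>(crc >> (8 * b)); // little-endian
    }
    return base64(bytes);
}
```

Encoding `0x9588C743` this way reproduces the `Q8eIlQAAAAA=` literal, confirming the byte order the test constants assume.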
fill_buffer_and_get_crc64(std::vector& buffer, size_t offset, size_t count); static utility::string_t get_random_container_name(size_t length = 10); static void check_blob_equal(const azure::storage::cloud_blob& expected, const azure::storage::cloud_blob& actual); static void check_blob_copy_state_equal(const azure::storage::copy_state& expected, const azure::storage::copy_state& actual); - static void check_blob_properties_equal(const azure::storage::cloud_blob_properties& expected, const azure::storage::cloud_blob_properties& actual); + static void check_blob_properties_equal(const azure::storage::cloud_blob_properties& expected, const azure::storage::cloud_blob_properties& actual, bool check_settable_only = false); std::vector list_all_containers(const utility::string_t& prefix, azure::storage::container_listing_details::values includes, int max_results, const azure::storage::blob_request_options& options); std::vector list_all_blobs_from_client(const utility::string_t& prefix, azure::storage::blob_listing_details::values includes, int max_results, const azure::storage::blob_request_options& options); azure::storage::cloud_blob_client m_client; + azure::storage::cloud_blob_client m_premium_client; + azure::storage::cloud_blob_client m_blob_storage_client; }; class temp_file : public blob_service_test_base @@ -128,7 +136,7 @@ class blob_service_test_base_with_objects_to_delete : public blob_service_test_b } protected: - void create_containers(const utility::string_t& prefix, std::size_t num); + void create_containers(const utility::string_t& prefix, std::size_t num, azure::storage::blob_container_public_access_type public_access_type = azure::storage::blob_container_public_access_type::off); void create_blobs(const azure::storage::cloud_blob_container& container, const utility::string_t& prefix, std::size_t num); void check_container_list(const std::vector& list, const utility::string_t& prefix, bool check_found); void check_blob_list(const std::vector& list); @@ 
-144,13 +152,17 @@ class container_test_base : public blob_service_test_base container_test_base() { m_container = m_client.get_container_reference(get_random_container_name()); + m_premium_container = m_premium_client.get_container_reference(get_random_container_name());/* manage create and delete in test case since it's not for all test cases*/ + m_blob_storage_container = m_blob_storage_client.get_container_reference(get_random_container_name());/* manage create and delete in test case since it's not for all test cases*/ } ~container_test_base() { try { - m_container.delete_container(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + m_container.delete_container_if_exists(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + m_premium_container.delete_container_if_exists(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + m_blob_storage_container.delete_container_if_exists(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); } catch (const azure::storage::storage_exception&) { @@ -161,10 +173,13 @@ class container_test_base : public blob_service_test_base void check_public_access(azure::storage::blob_container_public_access_type access); std::vector list_all_blobs(const utility::string_t& prefix, azure::storage::blob_listing_details::values includes, int max_results, const azure::storage::blob_request_options& options); + std::vector list_all_blobs(const azure::storage::cloud_blob_container & container, const utility::string_t & prefix, azure::storage::blob_listing_details::values includes, int max_results, const azure::storage::blob_request_options & options); void check_lease_access(azure::storage::cloud_blob_container& container, azure::storage::lease_state state, const utility::string_t& lease_id, bool fake, bool allow_delete); static void check_container_no_stale_property(azure::storage::cloud_blob_container& 
container); azure::storage::cloud_blob_container m_container; + azure::storage::cloud_blob_container m_premium_container; + azure::storage::cloud_blob_container m_blob_storage_container; }; class blob_test_base : public container_test_base @@ -189,6 +204,40 @@ class blob_test_base : public container_test_base static void check_blob_no_stale_property(azure::storage::cloud_blob& blob); }; +class premium_page_blob_test_base : public container_test_base +{ +public: + + premium_page_blob_test_base() + { + m_premium_container.create(azure::storage::blob_container_public_access_type::off, azure::storage::blob_request_options(), m_context); + m_blob = m_premium_container.get_page_blob_reference(_XPLATSTR("pageblob")); + } + + ~premium_page_blob_test_base() + { + } +protected: + azure::storage::cloud_page_blob m_blob; +}; + +class premium_block_blob_test_base : public container_test_base +{ +public: + + premium_block_blob_test_base() + { + m_blob_storage_container.create(azure::storage::blob_container_public_access_type::off, azure::storage::blob_request_options(), m_context); + m_blob = m_blob_storage_container.get_block_blob_reference(_XPLATSTR("blockblob")); + } + + ~premium_block_blob_test_base() + { + } +protected: + azure::storage::cloud_block_blob m_blob; +}; + class block_blob_test_base : public blob_test_base { public: diff --git a/Microsoft.WindowsAzure.Storage/tests/blob_versioning_test.cpp b/Microsoft.WindowsAzure.Storage/tests/blob_versioning_test.cpp new file mode 100644 index 00000000..fe0d3897 --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/tests/blob_versioning_test.cpp @@ -0,0 +1,125 @@ +// ----------------------------------------------------------------------------------------- +// +// Copyright 2020 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. +// +// ----------------------------------------------------------------------------------------- + +#include "stdafx.h" +#include "blob_test_base.h" +#include "check_macros.h" + +#include <set> + +SUITE(Blob) { + TEST_FIXTURE(block_blob_test_base, blob_versioning_properties) { + utility::string_t blob_content = _XPLATSTR("test"); + m_blob.upload_text(blob_content, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + auto version_id0 = m_blob.properties().version_id(); + CHECK(!version_id0.empty()); + + m_blob.download_attributes(); + CHECK(version_id0 == m_blob.properties().version_id()); + + m_blob.upload_text(blob_content, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + auto version_id1 = m_blob.properties().version_id(); + CHECK(version_id1 != version_id0); + + m_blob.metadata()[_XPLATSTR("k1")] = _XPLATSTR("value1"); + m_blob.upload_metadata(); + auto version_id2 = m_blob.properties().version_id(); + CHECK(version_id2 != version_id1); + + m_blob.create_snapshot(); + auto version_id3 = m_blob.properties().version_id(); + CHECK(version_id3 != version_id2); + + m_blob.properties().set_content_md5(utility::string_t()); + m_blob.upload_block_list(std::vector<azure::storage::block_list_item>()); + auto version_id4 = m_blob.properties().version_id(); + CHECK(version_id4 != version_id3); + CHECK(utility::string_t() == m_blob.download_text()); + CHECK(version_id4 == m_blob.properties().version_id()); + + + m_blob.start_copy(m_blob.uri()); + auto version_id5 = m_blob.properties().version_id(); + CHECK(version_id5
!= version_id4); + + auto blobs = m_container.list_blobs_segmented(utility::string_t(), true, azure::storage::blob_listing_details::none, 0, azure::storage::continuation_token(), azure::storage::blob_request_options(), azure::storage::operation_context()); + CHECK_EQUAL(1, blobs.results().size()); + CHECK(blobs.results()[0].is_blob()); + CHECK(blobs.results()[0].is_current_version()); + auto blob = blobs.results()[0].as_blob(); + CHECK(blob.version_id().empty()); + CHECK(!blob.properties().version_id().empty()); + + blobs = m_container.list_blobs_segmented(utility::string_t(), true, azure::storage::blob_listing_details::versions, 0, azure::storage::continuation_token(), azure::storage::blob_request_options(), azure::storage::operation_context()); + std::set<utility::string_t> versions; + + for (const auto& t : blobs.results()) + { + if (t.is_blob()) + { + versions.emplace(t.as_blob().version_id()); + } + } + CHECK(versions.find(version_id0) != versions.end()); + CHECK(versions.find(version_id1) != versions.end()); + CHECK(versions.find(version_id2) != versions.end()); + CHECK(versions.find(version_id3) != versions.end()); + CHECK(versions.find(version_id4) != versions.end()); + CHECK(versions.find(version_id5) != versions.end()); + + for (const auto& t : blobs.results()) + { + if (t.is_blob()) + { + blob = t.as_blob(); + CHECK(!blob.version_id().empty()); + if (t.is_current_version()) + { + CHECK(blob.version_id() == version_id5); + CHECK(blob.properties().version_id() == version_id5); + } + + if (blob.version_id() == version_id0) + { + azure::storage::cloud_block_blob block_blob(blob); + CHECK(blob_content == block_blob.download_text()); + + blob.download_attributes(); + CHECK(blob.metadata() == azure::storage::cloud_metadata()); + } + } + } + + m_blob.delete_blob(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), azure::storage::blob_request_options(), azure::storage::operation_context()); + + blobs =
m_container.list_blobs_segmented(utility::string_t(), true, azure::storage::blob_listing_details::versions, 0, azure::storage::continuation_token(), azure::storage::blob_request_options(), azure::storage::operation_context()); + CHECK(!blobs.results().empty()); + for (const auto& t : blobs.results()) + { + if (t.is_blob()) + { + CHECK(!t.is_current_version()); + blob = t.as_blob(); + blob.delete_blob(); + } + } + blobs = m_container.list_blobs_segmented(utility::string_t(), true, azure::storage::blob_listing_details::versions, 0, azure::storage::continuation_token(), azure::storage::blob_request_options(), azure::storage::operation_context()); + CHECK(blobs.results().empty()); + } +} diff --git a/Microsoft.WindowsAzure.Storage/tests/check_macros.h b/Microsoft.WindowsAzure.Storage/tests/check_macros.h index 0db3330b..0f8900c0 100644 --- a/Microsoft.WindowsAzure.Storage/tests/check_macros.h +++ b/Microsoft.WindowsAzure.Storage/tests/check_macros.h @@ -42,3 +42,13 @@ } \ } \ } while(0) + +#define CHECK_NOTHROW(expression) \ + do \ + { \ + bool exception_thrown = false; \ + try { expression; } \ + catch (...) { exception_thrown = true; } \ + if (exception_thrown) \ + UnitTest::CurrentTest::Results()->OnTestFailure(UnitTest::TestDetails(*UnitTest::CurrentTest::Details(), __LINE__), "Expected no exception thrown: \"" #expression "\"."); \ + } while(0) diff --git a/Microsoft.WindowsAzure.Storage/tests/checksum_test.cpp b/Microsoft.WindowsAzure.Storage/tests/checksum_test.cpp new file mode 100644 index 00000000..97421346 --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/tests/checksum_test.cpp @@ -0,0 +1,113 @@ +// ----------------------------------------------------------------------------------------- +// +// Copyright 2019 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. +// +// ----------------------------------------------------------------------------------------- + +#include "stdafx.h" +#include "check_macros.h" +#include "was/core.h" + +SUITE(Core) +{ + TEST(checksum_class) + { + CHECK(azure::storage::checksum_type::none == azure::storage::checksum_none_t::value); + CHECK(azure::storage::checksum_type::md5 == azure::storage::checksum_md5_t::value); + CHECK(azure::storage::checksum_type::crc64 == azure::storage::checksum_crc64_t::value); + CHECK(azure::storage::checksum_type::hmac_sha256 == azure::storage::checksum_hmac_sha256_t::value); + CHECK(azure::storage::checksum_type::none == azure::storage::checksum_none.value); + CHECK(azure::storage::checksum_type::md5 == azure::storage::checksum_md5.value); + CHECK(azure::storage::checksum_type::crc64 == azure::storage::checksum_crc64.value); + CHECK(azure::storage::checksum_type::hmac_sha256 == azure::storage::checksum_hmac_sha256.value); + + const utility::char_t* md5_cstr = _XPLATSTR("1B2M2Y8AsgTpgAmY7PhCfg=="); + const utility::string_t md5_str(md5_cstr); + const uint64_t crc64_val = 0x0; + const utility::string_t crc64_str(_XPLATSTR("AAAAAAAAAAA=")); + const utility::string_t hmac_sha256_str(_XPLATSTR("H3MaxXPHmTz2iCz6XIggaMFNXVI0gCqYsU/BChVkrHE=")); + + { + azure::storage::checksum cs; + CHECK(!cs.is_md5()); + CHECK(!cs.is_crc64()); + CHECK(!cs.is_hmac_sha256()); + CHECK(cs.empty()); + } + { + azure::storage::checksum cs(azure::storage::checksum_none); + CHECK(!cs.is_md5()); + CHECK(!cs.is_crc64()); + CHECK(!cs.is_hmac_sha256()); + CHECK(cs.empty()); + } + { + // 
For backward compatibility. + utility::string_t empty_string; + azure::storage::checksum cs(empty_string); + CHECK(!cs.is_md5()); + CHECK(!cs.is_crc64()); + CHECK(!cs.is_hmac_sha256()); + CHECK(cs.empty()); + } + { + azure::storage::checksum cs(md5_str); + CHECK(cs.is_md5()); + CHECK(!cs.is_crc64()); + CHECK(!cs.is_hmac_sha256()); + CHECK(!cs.empty()); + CHECK_UTF8_EQUAL(cs.md5(), md5_str); + } + { + azure::storage::checksum cs(md5_cstr); + CHECK(cs.is_md5()); + CHECK(!cs.is_crc64()); + CHECK(!cs.is_hmac_sha256()); + CHECK(!cs.empty()); + CHECK_UTF8_EQUAL(cs.md5(), md5_str); + } + { + azure::storage::checksum cs(azure::storage::checksum_md5, md5_str); + CHECK(cs.is_md5()); + CHECK(!cs.is_crc64()); + CHECK(!cs.is_hmac_sha256()); + CHECK(!cs.empty()); + CHECK_UTF8_EQUAL(cs.md5(), md5_str); + } + { + azure::storage::checksum cs(crc64_val); + CHECK(!cs.is_md5()); + CHECK(cs.is_crc64()); + CHECK(!cs.is_hmac_sha256()); + CHECK(!cs.empty()); + CHECK_UTF8_EQUAL(cs.crc64(), crc64_str); + } + { + azure::storage::checksum cs(azure::storage::checksum_crc64, crc64_val); + CHECK(!cs.is_md5()); + CHECK(cs.is_crc64()); + CHECK(!cs.is_hmac_sha256()); + CHECK(!cs.empty()); + CHECK_UTF8_EQUAL(cs.crc64(), crc64_str); + } + { + azure::storage::checksum cs(azure::storage::checksum_hmac_sha256, hmac_sha256_str); + CHECK(!cs.is_md5()); + CHECK(!cs.is_crc64()); + CHECK(cs.is_hmac_sha256()); + CHECK(!cs.empty()); + CHECK_UTF8_EQUAL(cs.hmac_sha256(), hmac_sha256_str); + } + } +} \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/tests/cloud_append_blob_test.cpp b/Microsoft.WindowsAzure.Storage/tests/cloud_append_blob_test.cpp index b911ecf1..76061efb 100644 --- a/Microsoft.WindowsAzure.Storage/tests/cloud_append_blob_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/cloud_append_blob_test.cpp @@ -20,6 +20,7 @@ #include "check_macros.h" #include "cpprest/producerconsumerstream.h" +#include "was/crc64.h" #include "wascore/constants.h" #pragma region Fixture @@ -36,29 
+37,37 @@ SUITE(Blob) azure::storage::blob_request_options options; utility::string_t md5_header; - m_context.set_sending_request([&md5_header](web::http::http_request& request, azure::storage::operation_context) + utility::string_t crc64_header; + m_context.set_sending_request([&md5_header, &crc64_header](web::http::http_request& request, azure::storage::operation_context) { if (!request.headers().match(web::http::header_names::content_md5, md5_header)) { md5_header.clear(); } + if (!request.headers().match(azure::storage::protocol::ms_header_content_crc64, crc64_header)) + { + crc64_header.clear(); + } }); m_blob.create_or_replace(azure::storage::access_condition(), options, m_context); check_blob_no_stale_property(m_blob); options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); for (uint16_t i = 0; i < 3; ++i) { - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); int64_t offset = m_blob.append_block(stream, utility::string_t(), azure::storage::access_condition(), options, m_context); CHECK_UTF8_EQUAL(utility::string_t(), md5_header); + CHECK_UTF8_EQUAL(utility::string_t(), crc64_header); CHECK_EQUAL(i * buffer_size, offset); CHECK_EQUAL(i + 1, m_blob.properties().append_blob_committed_block_count()); } options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); for (uint16_t i = 3; i < 6; ++i) { auto md5 = fill_buffer_and_get_md5(buffer); @@ -70,6 +79,7 @@ SUITE(Blob) } options.set_use_transactional_md5(true); + options.set_use_transactional_crc64(false); for (uint16_t i = 6; i < 9; ++i) { auto md5 = fill_buffer_and_get_md5(buffer); @@ -80,19 +90,53 @@ SUITE(Blob) CHECK_EQUAL(i + 1, m_blob.properties().append_blob_committed_block_count()); } + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); + for (uint16_t i = 9; i < 12; ++i) + { + auto crc64 = fill_buffer_and_get_crc64(buffer); + auto 
stream = concurrency::streams::bytestream::open_istream(buffer); + int64_t offset = m_blob.append_block(stream, azure::storage::checksum_none, azure::storage::access_condition(), options, m_context); + CHECK_UTF8_EQUAL(crc64, crc64_header); + CHECK_EQUAL(i * buffer_size, offset); + CHECK_EQUAL(i + 1, m_blob.properties().append_blob_committed_block_count()); + } + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); + for (uint16_t i = 12; i < 15; ++i) + { + auto crc64 = fill_buffer_and_get_crc64(buffer); + uint64_t crc64_val = azure::storage::crc64(buffer.data(), buffer.size()); + auto stream = concurrency::streams::bytestream::open_istream(buffer); + int64_t offset = m_blob.append_block(stream, crc64_val, azure::storage::access_condition(), options, m_context); + CHECK_UTF8_EQUAL(crc64, crc64_header); + CHECK_EQUAL(i * buffer_size, offset); + CHECK_EQUAL(i + 1, m_blob.properties().append_blob_committed_block_count()); + } + // block stream with length = 0 options.set_use_transactional_md5(true); - fill_buffer_and_get_md5(buffer); + options.set_use_transactional_crc64(false); + fill_buffer(buffer); auto stream1 = concurrency::streams::bytestream::open_istream(buffer); stream1.seek(buffer.size()); CHECK_THROW(m_blob.append_block(stream1, utility::string_t(), azure::storage::access_condition(), options, m_context), azure::storage::storage_exception); options.set_use_transactional_md5(true); - fill_buffer_and_get_md5(buffer); + options.set_use_transactional_crc64(false); + fill_buffer(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); CHECK_THROW(m_blob.append_block(stream, dummy_md5, azure::storage::access_condition(), options, m_context), azure::storage::storage_exception); CHECK_UTF8_EQUAL(dummy_md5, md5_header); + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); + fill_buffer(buffer); + stream = concurrency::streams::bytestream::open_istream(buffer); + 
CHECK_THROW(m_blob.append_block(stream, dummy_crc64_val, azure::storage::access_condition(), options, m_context), azure::storage::storage_exception); + CHECK_UTF8_EQUAL(dummy_crc64, crc64_header); + m_context.set_sending_request(std::function()); } @@ -106,11 +150,12 @@ SUITE(Blob) m_blob.create_or_replace(azure::storage::access_condition(), options, m_context); check_blob_no_stale_property(m_blob); - size_t sizes[] = { 1, 2, 1023, 1024, 4 * 1024, 1024 * 1024, azure::storage::protocol::max_block_size - 1, azure::storage::protocol::max_block_size }; - size_t invalid_sizes[] = { azure::storage::protocol::max_block_size + 1, 6 * 1024 * 1024, 8 * 1024 * 1024 }; + size_t sizes[] = { 1, 2, 1023, 1024, 4 * 1024, 1024 * 1024, azure::storage::protocol::max_append_block_size - 1, azure::storage::protocol::max_append_block_size }; + size_t invalid_sizes[] = { azure::storage::protocol::max_append_block_size + 1, 6 * 1024 * 1024, 8 * 1024 * 1024 }; int64_t bytes_appended = 0; options.set_use_transactional_md5(true); + options.set_use_transactional_crc64(false); for (size_t size : sizes) { buffer.resize(size); @@ -123,12 +168,26 @@ SUITE(Blob) } options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); + for (size_t size : sizes) + { + buffer.resize(size); + fill_buffer(buffer, 0, size); + auto stream = concurrency::streams::bytestream::open_istream(buffer); + int64_t offset = m_blob.append_block(stream, azure::storage::checksum_none, azure::storage::access_condition(), options, m_context); + CHECK_EQUAL(bytes_appended, offset); + + bytes_appended += size; + } + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); for (size_t size : invalid_sizes) { buffer.resize(size); - fill_buffer_and_get_md5(buffer, 0, size); + fill_buffer(buffer, 0, size); auto stream = concurrency::streams::bytestream::open_istream(buffer); - CHECK_THROW(m_blob.append_block(stream, utility::string_t(), azure::storage::access_condition(), 
options, m_context), azure::storage::storage_exception); + CHECK_THROW(m_blob.append_block(stream, azure::storage::checksum_none, azure::storage::access_condition(), options, m_context), std::invalid_argument); + } } @@ -137,7 +196,7 @@ SUITE(Blob) const size_t buffer_size = 64 * 1024; std::vector<uint8_t> buffer; buffer.resize(buffer_size); - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); azure::storage::blob_request_options options; options.set_use_transactional_md5(false); @@ -172,7 +231,7 @@ SUITE(Blob) for (uint16_t i = 0; i < 3; ++i) { - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); int64_t offset = m_blob.append_block(stream, utility::string_t(), azure::storage::access_condition(), options, m_context); block_count++; @@ -306,7 +365,7 @@ SUITE(Blob) { std::vector<uint8_t> buffer; buffer.resize(4 * 1024 * 1024); - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); std::copy(buffer.begin(), buffer.end(), file_buffer.begin() + buffer_offsets1[i]); auto stream = concurrency::streams::bytestream::open_istream(buffer); @@ -324,7 +383,7 @@ { std::vector<uint8_t> buffer; buffer.resize(4 * 1024 * 1024); - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); std::copy(buffer.begin(), buffer.begin() + 2 * 1024 * 1024, file_buffer.begin() + buffer_offsets2[i]); auto stream = concurrency::streams::bytestream::open_istream(buffer); @@ -341,7 +400,7 @@ { std::vector<uint8_t> buffer; buffer.resize(5 * 1024 * 1024); - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); std::copy(buffer.begin(), buffer.end(), file_buffer.begin() + buffer_offsets3[i]); // create a temporary test file @@ -498,16 +557,16 @@ SUITE(Blob) TEST_FIXTURE(append_blob_test_base, append_blob_upload_max_size_condition) { const size_t buffer_size = 1024 * 1024; - + std::vector<uint8_t> buffer; buffer.resize(buffer_size); - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); concurrency::streams::istream stream =
concurrency::streams::bytestream::open_istream(buffer); - + auto condition = azure::storage::access_condition::generate_if_max_size_less_than_or_equal_condition(512); CHECK_THROW(m_blob.upload_from_stream(stream, condition, azure::storage::blob_request_options(), m_context), std::invalid_argument); } - + TEST_FIXTURE(append_blob_test_base, append_block_stale_properties) { azure::storage::blob_request_options options; @@ -534,4 +593,720 @@ SUITE(Blob) m_blob.download_attributes(azure::storage::access_condition::generate_lease_condition(lease_id), options, op); m_blob.delete_blob(azure::storage::delete_snapshots_option::none, azure::storage::access_condition::generate_lease_condition(lease_id), options, op); } + + TEST_FIXTURE(append_blob_test_base, append_blob_create_delete_cancellation) + { + + { + // cancel the cancellation prior to the operation + auto cancel_token_src = pplx::cancellation_token_source(); + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.create_or_replace_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + // cancel the cancellation during the operation + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.create_or_replace_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(3)); // sleep for some time before canceling the request, then check the result.
+ cancel_token_src.cancel(); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.create_or_replace_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + task_result.get(); + // cancel the cancellation after the operation + cancel_token_src.cancel(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + + { + // cancel the cancellation prior to the operation + auto cancel_token_src = pplx::cancellation_token_source(); + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + // cancel the cancellation during the operation + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(3)); // sleep for some time before canceling the request, then check the result.
+ cancel_token_src.cancel(); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + task_result.get(); + // cancel the cancellation after the operation + cancel_token_src.cancel(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + + } + + TEST_FIXTURE(append_blob_test_base, append_blob_create_delete_timeout) + { + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.create_or_replace_async(azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(3)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.create_or_replace_async(azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(10000)); + + std::string ex_msg; + + try + { + auto task_result = 
m_blob.create_or_replace_async(azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(3)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(10000)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + } + + TEST_FIXTURE(append_blob_test_base, append_blob_create_cancellation_timeout) + { + { + //when cancellation first + auto options = azure::storage::blob_request_options(); + 
options.set_maximum_execution_time(std::chrono::milliseconds(100)); + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.create_or_replace_async(azure::storage::access_condition(), options, m_context, cancel_token_src.get_token()); + cancel_token_src.cancel(); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + //when timeout first + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(10)); + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.create_or_replace_async(azure::storage::access_condition(), options, m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(30)); // sleep for some time before canceling the request, then check the result. + cancel_token_src.cancel(); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + } + + TEST_FIXTURE(append_blob_test_base, append_blob_open_read_write_cancellation) + { + std::vector<uint8_t> buffer; + buffer.resize(4 * 1024 * 1024); + fill_buffer(buffer); + + { + // cancel the cancellation prior to the operation + auto cancel_token_src = pplx::cancellation_token_source(); + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(true, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + //
cancel the cancellation prior to the operation and write to a canceled ostream. + auto cancel_token_src = pplx::cancellation_token_source(); + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(true, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + // cancel the cancellation during the operation + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(true, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + std::this_thread::sleep_for(std::chrono::milliseconds(10)); // sleep for some time before canceling the request, then check the result.
+ cancel_token_src.cancel(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(true, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + // cancel the cancellation after the operation + cancel_token_src.cancel(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + + m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer)); + + { + auto cancel_token_src = pplx::cancellation_token_source(); + // cancel the cancellation prior to the operation + cancel_token_src.cancel(); + + std::string ex_msg; + + + try + { + auto task_result = m_blob.open_read_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + auto is = task_result.get(); + is.read().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + // cancel the cancellation during the operation + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_read_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(10)); // sleep for some time before canceling the request, then check the result.
+ cancel_token_src.cancel(); + auto is = task_result.get(); + is.read().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_read_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(10)); // sleep for some time before canceling the request, then check the result. + auto is = task_result.get(); + is.read().get(); + // cancel the cancellation after the operation + cancel_token_src.cancel(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + + } + + TEST_FIXTURE(append_blob_test_base, append_blob_open_read_write_timeout) + { + std::vector<uint8_t> buffer; + buffer.resize(4 * 1024 * 1024); + fill_buffer(buffer); + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(true, azure::storage::access_condition(), options, m_context); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(20)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(true, azure::storage::access_condition(), options, m_context); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + } + catch (std::exception& e) + {
ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::seconds(30)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(true, azure::storage::access_condition(), options, m_context); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + + m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer)); + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_read_async(azure::storage::access_condition(), options, m_context); + auto is = task_result.get(); + is.read().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(20)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_read_async(azure::storage::access_condition(), options, m_context); + auto is = task_result.get(); + is.read().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::seconds(30)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_read_async(azure::storage::access_condition(), options, m_context); + auto is = task_result.get(); 
+ is.read().get();
+ }
+ catch (std::exception& e)
+ {
+ ex_msg = std::string(e.what());
+ }
+
+ CHECK_EQUAL("", ex_msg);
+ }
+ }
+
+ TEST_FIXTURE(append_blob_test_base, append_blob_open_read_write_cancellation_timeout)
+ {
+ std::vector<uint8_t> buffer;
+ buffer.resize(4 * 1024 * 1024);
+ fill_buffer(buffer);
+ {
+ auto options = azure::storage::blob_request_options();
+ options.set_maximum_execution_time(std::chrono::milliseconds(10));
+ auto cancel_token_src = pplx::cancellation_token_source();
+
+ std::string ex_msg;
+
+ try
+ {
+ auto task_result = m_blob.open_write_async(true, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token());
+ auto os = task_result.get();
+ os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait();
+ std::this_thread::sleep_for(std::chrono::milliseconds(10)); // sleep for some time before canceling the request, then check the result.
+ cancel_token_src.cancel();
+ os.close().get();
+ }
+ catch (std::exception& e)
+ {
+ ex_msg = std::string(e.what());
+ }
+
+ CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+ }
+ }
+
+ TEST_FIXTURE(append_blob_test_base, append_blob_concurrent_upload_cancellation_timeout)
+ {
+ utility::size64_t length = 260 * 1024 * 1024;
+ std::vector<uint8_t> buffer;
+ buffer.resize(length);
+ fill_buffer(buffer);
+
+ {
+ auto cancel_token_src = pplx::cancellation_token_source();
+ auto options = azure::storage::blob_request_options();
+ options.set_parallelism_factor(4);
+ options.set_maximum_execution_time(std::chrono::milliseconds(1000));
+ // cancel the cancellation prior to the operation
+ cancel_token_src.cancel();
+
+ std::string ex_msg;
+
+ try
+ {
+ auto task_result = m_blob.upload_from_stream_async(concurrency::streams::bytestream::open_istream(buffer), length, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token());
+ task_result.get();
+ }
+ catch (azure::storage::storage_exception& e)
+ {
+ ex_msg = std::string(e.what());
+
}
+
+ CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+ }
+
+ {
+ auto cancel_token_src = pplx::cancellation_token_source();
+ auto options = azure::storage::blob_request_options();
+ options.set_parallelism_factor(4);
+ options.set_maximum_execution_time(std::chrono::milliseconds(1000));
+
+ std::string ex_msg;
+
+ try
+ {
+ auto task_result = m_blob.upload_from_stream_async(concurrency::streams::bytestream::open_istream(buffer), length, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token());
+ std::this_thread::sleep_for(std::chrono::milliseconds(300)); // sleep for some time before canceling the request, then check the result.
+ cancel_token_src.cancel();
+ task_result.get();
+ }
+ catch (azure::storage::storage_exception& e)
+ {
+ ex_msg = std::string(e.what());
+ }
+
+ CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+ }
+
+ {
+ auto options = azure::storage::blob_request_options();
+ options.set_parallelism_factor(4);
+ options.set_maximum_execution_time(std::chrono::milliseconds(1000));
+
+ std::string ex_msg;
+
+ try
+ {
+ auto task_result = m_blob.upload_from_stream_async(concurrency::streams::bytestream::open_istream(buffer), length, azure::storage::access_condition(), options, m_context);
+ task_result.get();
+ }
+ catch (azure::storage::storage_exception& e)
+ {
+ ex_msg = std::string(e.what());
+ }
+
+ CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+ }
+ }
+
+ TEST_FIXTURE(append_blob_test_base, append_blob_cpkv)
+ {
+ utility::size64_t length = 128 * 1024;
+ std::vector<uint8_t> buffer(length);
+ fill_buffer(buffer);
+ auto empty_options = azure::storage::blob_request_options();
+ auto cpk_options = azure::storage::blob_request_options();
+ std::vector<uint8_t> key(32);
+ fill_buffer(key);
+ cpk_options.set_encryption_key(key);
+
+ m_blob.create_or_replace(azure::storage::access_condition(), cpk_options, m_context);
+
+ CHECK_THROW(m_blob.exists(empty_options, m_context), azure::storage::storage_exception);
+
m_blob.exists(cpk_options, m_context); + CHECK_THROW(m_blob.append_block(concurrency::streams::bytestream::open_istream(buffer), azure::storage::checksum_none, azure::storage::access_condition(), empty_options, m_context), azure::storage::storage_exception); + m_blob.append_block(concurrency::streams::bytestream::open_istream(buffer), azure::storage::checksum_none, azure::storage::access_condition(), cpk_options, m_context); + CHECK_THROW(m_blob.append_text(_XPLATSTR("Hello world"), azure::storage::access_condition(), empty_options, m_context), azure::storage::storage_exception); + m_blob.append_text(_XPLATSTR("Hello world"), azure::storage::access_condition(), cpk_options, m_context); + } } diff --git a/Microsoft.WindowsAzure.Storage/tests/cloud_blob_client_test.cpp b/Microsoft.WindowsAzure.Storage/tests/cloud_blob_client_test.cpp index 930be73e..6d8e7018 100644 --- a/Microsoft.WindowsAzure.Storage/tests/cloud_blob_client_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/cloud_blob_client_test.cpp @@ -19,17 +19,19 @@ #include "blob_test_base.h" #include "check_macros.h" +#include "wascore/util.h" + #pragma region Fixture -void blob_service_test_base_with_objects_to_delete::create_containers(const utility::string_t& prefix, std::size_t num) +void blob_service_test_base_with_objects_to_delete::create_containers(const utility::string_t& prefix, std::size_t num, azure::storage::blob_container_public_access_type public_access_type) { for (std::size_t i = 0; i < num; ++i) { - auto index = utility::conversions::print_string(i); + auto index = azure::storage::core::convert_to_string(i); auto container = m_client.get_container_reference(prefix + index); m_containers_to_delete.push_back(container); container.metadata()[_XPLATSTR("index")] = index; - container.create(azure::storage::blob_container_public_access_type::off, azure::storage::blob_request_options(), m_context); + container.create(public_access_type, azure::storage::blob_request_options(), m_context); } } @@ 
-37,7 +39,7 @@ void blob_service_test_base_with_objects_to_delete::create_blobs(const azure::st { for (std::size_t i = 0; i < num; i++) { - auto index = utility::conversions::print_string(i); + auto index = azure::storage::core::convert_to_string(i); auto blob = container.get_block_blob_reference(prefix + index); m_blobs_to_delete.push_back(blob); blob.upload_text(_XPLATSTR("test"), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); @@ -64,6 +66,7 @@ void blob_service_test_base_with_objects_to_delete::check_container_list(const s auto index_str = list_iter->metadata().find(_XPLATSTR("index")); CHECK(index_str != list_iter->metadata().end()); CHECK_UTF8_EQUAL(iter->name(), prefix + index_str->second); + CHECK(list_iter->properties().public_access() == iter->properties().public_access()); containers.erase(iter); found = true; break; @@ -146,14 +149,43 @@ SUITE(Blob) CHECK(container.properties().lease_status() == azure::storage::lease_status::unspecified); CHECK(container.properties().lease_state() == azure::storage::lease_state::unspecified); CHECK(container.properties().lease_duration() == azure::storage::lease_duration::unspecified); + CHECK(container.properties().public_access() == azure::storage::blob_container_public_access_type::off); CHECK(container.is_valid()); } + TEST_FIXTURE(blob_service_test_base, download_account_properties_service) + { + auto properties = m_client.download_account_properties(); + CHECK((properties.sku_name() == _XPLATSTR("Standard_RAGRS")) || (properties.sku_name() == _XPLATSTR("Standard_LRS"))); + CHECK((properties.account_kind() == _XPLATSTR("Storage")) || (properties.account_kind() == _XPLATSTR("StorageV2"))); + + properties = m_premium_client.download_account_properties(); + CHECK((properties.sku_name() == _XPLATSTR("Premium_RAGRS")) || (properties.sku_name() == _XPLATSTR("Premium_LRS"))); + CHECK((properties.account_kind() == _XPLATSTR("Storage")) || (properties.account_kind() == 
_XPLATSTR("StorageV2"))); + + properties = m_blob_storage_client.download_account_properties(); + CHECK((properties.sku_name() == _XPLATSTR("Standard_RAGRS")) || (properties.sku_name() == _XPLATSTR("Standard_LRS"))); + CHECK(properties.account_kind() == _XPLATSTR("BlobStorage")); + + azure::storage::account_shared_access_policy access_policy; + access_policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_minutes(30)); + access_policy.set_permissions(azure::storage::account_shared_access_policy::read); + access_policy.set_service_type(azure::storage::account_shared_access_policy::service_types::blob); + access_policy.set_resource_type(azure::storage::account_shared_access_policy::resource_types::service); + + azure::storage::storage_credentials account_sas_credentials(test_config::instance().account().get_shared_access_signature(access_policy)); + azure::storage::cloud_blob_client sas_client(test_config::instance().account().blob_endpoint(), account_sas_credentials); + + properties = sas_client.download_account_properties(); + CHECK((properties.sku_name() == _XPLATSTR("Standard_RAGRS")) || (properties.sku_name() == _XPLATSTR("Standard_LRS"))); + CHECK((properties.account_kind() == _XPLATSTR("Storage")) || (properties.account_kind() == _XPLATSTR("StorageV2"))); + } + TEST_FIXTURE(blob_service_test_base_with_objects_to_delete, list_containers_with_prefix) { auto prefix = get_random_container_name(); - - create_containers(prefix, 1); + + create_containers(prefix, 1, get_random_enum(azure::storage::blob_container_public_access_type::blob)); auto listing = list_all_containers(prefix, azure::storage::container_listing_details::all, 1, azure::storage::blob_request_options()); @@ -164,17 +196,17 @@ SUITE(Blob) { auto prefix = get_random_container_name(); - create_containers(prefix, 1); + create_containers(prefix, 1, get_random_enum(azure::storage::blob_container_public_access_type::blob)); auto listing = list_all_containers(utility::string_t(), 
azure::storage::container_listing_details::all, 5001, azure::storage::blob_request_options()); - + check_container_list(listing, prefix, false); } TEST_FIXTURE(blob_service_test_base_with_objects_to_delete, list_containers_with_continuation_token) { auto prefix = get_random_container_name(); - create_containers(prefix, 10); + create_containers(prefix, 10, get_random_enum(azure::storage::blob_container_public_access_type::blob)); std::vector listing; azure::storage::continuation_token token; @@ -214,7 +246,7 @@ SUITE(Blob) for (int i = 0; i < 3; i++) { - auto index = utility::conversions::print_string(i); + auto index = azure::storage::core::convert_to_string(i); auto blob = m_container.get_block_blob_reference(prefix + index); blob.upload_text(_XPLATSTR("test"), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); blobs.push_back(blob); @@ -222,7 +254,7 @@ SUITE(Blob) for (int i = 0; i < 2; i++) { - auto index = utility::conversions::print_string(i); + auto index = azure::storage::core::convert_to_string(i); auto blob = m_container.get_block_blob_reference(prefix2 + index); blob.upload_text(_XPLATSTR("test2"), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); blobs2.push_back(blob); @@ -275,4 +307,66 @@ SUITE(Blob) client.set_authentication_scheme(azure::storage::authentication_scheme::shared_key_lite); client.list_containers_segmented(utility::string_t(), azure::storage::container_listing_details::none, 1, azure::storage::continuation_token(), azure::storage::blob_request_options(), m_context); } + + TEST_FIXTURE(blob_test_base, list_containers_cancellation_timeout) + { + { + auto cancel_token_src = pplx::cancellation_token_source(); + // cancel the cancellation prior to the operation + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_client.list_containers_segmented_async(_XPLATSTR(""), azure::storage::container_listing_details::none, 100000, 
azure::storage::continuation_token(), azure::storage::blob_request_options(), azure::storage::operation_context(), cancel_token_src.get_token()); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + + std::string ex_msg; + + try + { + auto task_result = m_client.list_containers_segmented_async(_XPLATSTR(""), azure::storage::container_listing_details::none, 100000, azure::storage::continuation_token(), options, azure::storage::operation_context()); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_client.list_containers_segmented_async(_XPLATSTR(""), azure::storage::container_listing_details::none, 100000, azure::storage::continuation_token(), azure::storage::blob_request_options(), azure::storage::operation_context(), cancel_token_src.get_token()); + task_result.get(); + // cancel the cancellation after the operation + cancel_token_src.cancel(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + } } diff --git a/Microsoft.WindowsAzure.Storage/tests/cloud_blob_container_test.cpp b/Microsoft.WindowsAzure.Storage/tests/cloud_blob_container_test.cpp index 683d8e42..de23c468 100644 --- a/Microsoft.WindowsAzure.Storage/tests/cloud_blob_container_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/cloud_blob_container_test.cpp @@ -19,6 +19,9 @@ #include "blob_test_base.h" #include "check_macros.h" +#include "wascore/util.h" +#include "cpprest/asyncrt_utils.h" + #pragma 
region Fixture
 
 void container_test_base::check_public_access(azure::storage::blob_container_public_access_type access)
@@ -69,6 +72,22 @@ std::vector<azure::storage::cloud_blob> container_test_base::list_all_blobs(cons
 return blobs;
 }
 
+std::vector<azure::storage::cloud_blob> container_test_base::list_all_blobs(const azure::storage::cloud_blob_container& container, const utility::string_t& prefix, azure::storage::blob_listing_details::values includes, int max_results, const azure::storage::blob_request_options& options)
+{
+ std::vector<azure::storage::cloud_blob> blobs;
+ azure::storage::list_blob_item_iterator end_of_result;
+ auto iter = container.list_blobs(prefix, true, includes, max_results, options, m_context);
+ for (; iter != end_of_result; ++iter)
+ {
+ if (iter->is_blob())
+ {
+ blobs.push_back(iter->as_blob());
+ }
+ }
+
+ return blobs;
+}
+
 #pragma endregion
 
 SUITE(Blob)
@@ -92,6 +111,24 @@ SUITE(Blob)
 CHECK_UTF8_EQUAL(m_container.uri().secondary_uri().to_string(), directory.container().uri().secondary_uri().to_string());
 }
 
+ TEST_FIXTURE(container_test_base, download_account_properties_container)
+ {
+ auto properties = m_container.download_account_properties();
+ CHECK(!properties.sku_name().empty());
+ CHECK(!properties.account_kind().empty());
+
+ azure::storage::blob_shared_access_policy access_policy;
+ access_policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_minutes(30));
+ access_policy.set_permissions(azure::storage::account_shared_access_policy::read);
+
+ azure::storage::storage_credentials sas_credentials(m_container.get_shared_access_signature(access_policy));
+ azure::storage::cloud_blob_container sas_container(m_container.uri(), sas_credentials);
+
+ properties = sas_container.download_account_properties();
+ CHECK(!properties.sku_name().empty());
+ CHECK(!properties.account_kind().empty());
+ }
+
 TEST_FIXTURE(container_test_base, container_create_delete)
 {
 CHECK(!m_container.exists(azure::storage::blob_request_options(), m_context));
@@ -108,6 +145,7 @@ SUITE(Blob)
 TEST_FIXTURE(container_test_base,
container_create_public_off) { m_container.create(azure::storage::blob_container_public_access_type::off, azure::storage::blob_request_options(), m_context); + CHECK(m_container.properties().public_access() == azure::storage::blob_container_public_access_type::off); check_public_access(azure::storage::blob_container_public_access_type::off); check_container_no_stale_property(m_container); } @@ -115,6 +153,7 @@ SUITE(Blob) TEST_FIXTURE(container_test_base, container_create_public_blob) { m_container.create(azure::storage::blob_container_public_access_type::blob, azure::storage::blob_request_options(), m_context); + CHECK(m_container.properties().public_access() == azure::storage::blob_container_public_access_type::blob); check_public_access(azure::storage::blob_container_public_access_type::blob); check_container_no_stale_property(m_container); } @@ -122,6 +161,7 @@ SUITE(Blob) TEST_FIXTURE(container_test_base, container_create_public_container) { m_container.create(azure::storage::blob_container_public_access_type::container, azure::storage::blob_request_options(), m_context); + CHECK(m_container.properties().public_access() == azure::storage::blob_container_public_access_type::container); check_public_access(azure::storage::blob_container_public_access_type::container); check_container_no_stale_property(m_container); } @@ -150,12 +190,14 @@ SUITE(Blob) // Create with 2 pairs m_container.metadata()[_XPLATSTR("key1")] = _XPLATSTR("value1"); m_container.metadata()[_XPLATSTR("key2")] = _XPLATSTR("value2"); - m_container.create(azure::storage::blob_container_public_access_type::off, azure::storage::blob_request_options(), m_context); + m_container.create(get_random_enum(azure::storage::blob_container_public_access_type::blob), azure::storage::blob_request_options(), m_context); check_container_no_stale_property(m_container); auto same_container = m_client.get_container_reference(m_container.name()); + CHECK(same_container.properties().public_access() == 
azure::storage::blob_container_public_access_type::off);
 CHECK(same_container.metadata().empty());
 same_container.download_attributes(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context);
+ CHECK(same_container.properties().public_access() == m_container.properties().public_access());
 CHECK_EQUAL(2U, same_container.metadata().size());
 CHECK_UTF8_EQUAL(_XPLATSTR("value1"), same_container.metadata()[_XPLATSTR("key1")]);
 CHECK_UTF8_EQUAL(_XPLATSTR("value2"), same_container.metadata()[_XPLATSTR("key2")]);
@@ -192,7 +234,7 @@ SUITE(Blob)
 
 for (int i = 0; i < 4; i++)
 {
- auto index = utility::conversions::print_string(i);
+ auto index = azure::storage::core::convert_to_string(i);
 auto blob = m_container.get_block_blob_reference(_XPLATSTR("blockblob") + index);
 
 blob.metadata()[_XPLATSTR("index")] = index;
@@ -205,7 +247,7 @@ SUITE(Blob)
 
 for (int i = 0; i < 3; i++)
 {
- auto index = utility::conversions::print_string(i);
+ auto index = azure::storage::core::convert_to_string(i);
 auto blob = m_container.get_page_blob_reference(_XPLATSTR("pageblob") + index);
 
 blob.metadata()[_XPLATSTR("index")] = index;
@@ -216,7 +258,7 @@ SUITE(Blob)
 
 for (int i = 0; i < 3; i++)
 {
- auto index = utility::conversions::print_string(i);
+ auto index = azure::storage::core::convert_to_string(i);
 auto blob = m_container.get_append_blob_reference(_XPLATSTR("appendblob") + index);
 
 blob.metadata()[_XPLATSTR("index")] = index;
@@ -225,7 +267,7 @@ SUITE(Blob)
 std::vector<uint8_t> buffer;
 buffer.resize((i + 1) * 8 * 1024);
- fill_buffer_and_get_md5(buffer);
+ fill_buffer(buffer);
 auto stream = concurrency::streams::container_stream<std::vector<uint8_t>>::open_istream(buffer);
 blob.append_block(stream, utility::string_t(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context);
 blobs[blob.name()] = blob;
@@ -242,7 +284,7 @@ SUITE(Blob)
 auto index_str = blob->second.metadata().find(_XPLATSTR("index"));
 CHECK(index_str != blob->second.metadata().end());
- auto index =
utility::conversions::scan_string(index_str->second);
+ auto index = utility::conversions::details::scan_string(index_str->second);
 
 switch (iter->type())
 {
@@ -276,6 +318,163 @@ SUITE(Blob)
 }
 }
 
+ TEST_FIXTURE(container_test_base, container_list_blobs_only_space_in_name)
+ {
+ m_container.create(azure::storage::blob_container_public_access_type::off, azure::storage::blob_request_options(), m_context);
+ check_container_no_stale_property(m_container);
+ std::map<utility::string_t, azure::storage::cloud_block_blob> blobs;
+
+ auto single_space_blob = m_container.get_block_blob_reference(_XPLATSTR(" "));
+
+ std::vector<uint8_t> buffer;
+ buffer.resize(1024);
+ auto stream_1 = concurrency::streams::container_stream<std::vector<uint8_t>>::open_istream(buffer);
+ single_space_blob.upload_from_stream(stream_1, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context);
+ blobs[single_space_blob.name()] = single_space_blob;
+
+ auto double_space_blob = m_container.get_block_blob_reference(_XPLATSTR("  "));
+
+ buffer.resize(1024);
+ auto stream_2 = concurrency::streams::container_stream<std::vector<uint8_t>>::open_istream(buffer);
+ double_space_blob.upload_from_stream(stream_2, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context);
+ blobs[double_space_blob.name()] = double_space_blob;
+
+ auto listing1 = list_all_blobs(utility::string_t(), azure::storage::blob_listing_details::all, 0, azure::storage::blob_request_options());
+ for (auto iter = listing1.begin(); iter != listing1.end(); ++iter)
+ {
+ auto blob = blobs.find(iter->name());
+ CHECK(blob != blobs.end());
+
+ CHECK_UTF8_EQUAL(blob->second.uri().primary_uri().to_string(), iter->uri().primary_uri().to_string());
+ CHECK_UTF8_EQUAL(blob->second.uri().secondary_uri().to_string(), iter->uri().secondary_uri().to_string());
+
+ blobs.erase(blob);
+ }
+
+ CHECK_EQUAL(0U, blobs.size());
+ }
+
+ TEST_FIXTURE(container_test_base, container_list_premium_blobs)
+ {
+ // preparation
+ // Note that this case could fail due to insufficient quota. Cleaning up the premium account could resolve the issue.
+
+ m_premium_container.create(azure::storage::blob_container_public_access_type::off, azure::storage::blob_request_options(), m_context);
+ m_blob_storage_container.create(azure::storage::blob_container_public_access_type::off, azure::storage::blob_request_options(), m_context);
+ std::map<utility::string_t, azure::storage::cloud_page_blob> premium_page_blobs;
+ std::map<utility::string_t, azure::storage::cloud_block_blob> block_blobs;
+
+ for (int i = 0; i < 3; i++)
+ {
+ auto index = azure::storage::core::convert_to_string(i);
+ auto blob = m_blob_storage_container.get_block_blob_reference(_XPLATSTR("blockblob") + index);
+ blob.metadata()[_XPLATSTR("index")] = index;
+
+ std::vector<uint8_t> buffer;
+ buffer.resize(i * 16 * 1024);
+ auto stream = concurrency::streams::container_stream<std::vector<uint8_t>>::open_istream(buffer);
+ blob.upload_from_stream(stream, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context);
+ block_blobs[blob.name()] = blob;
+ }
+
+ for (int i = 0; i < 3; i++)
+ {
+ auto index = azure::storage::core::convert_to_string(i);
+ auto blob = m_premium_container.get_page_blob_reference(_XPLATSTR("pageblob") + index);
+ blob.metadata()[_XPLATSTR("index")] = index;
+
+ blob.create(i * 512, azure::storage::premium_blob_tier::p4, 0, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context);
+ premium_page_blobs[blob.name()] = blob;
+ }
+
+ block_blobs[_XPLATSTR("blockblob0")].set_standard_blob_tier(azure::storage::standard_blob_tier::hot, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context);
+ block_blobs[_XPLATSTR("blockblob1")].set_standard_blob_tier(azure::storage::standard_blob_tier::cool, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context);
+ block_blobs[_XPLATSTR("blockblob2")].set_standard_blob_tier(azure::storage::standard_blob_tier::archive, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context);
+
premium_page_blobs[_XPLATSTR("pageblob0")].set_premium_blob_tier(azure::storage::premium_blob_tier::p4, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + premium_page_blobs[_XPLATSTR("pageblob1")].set_premium_blob_tier(azure::storage::premium_blob_tier::p6, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + premium_page_blobs[_XPLATSTR("pageblob2")].set_premium_blob_tier(azure::storage::premium_blob_tier::p10, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + + //test block blob. + auto listing1 = list_all_blobs(m_blob_storage_container, utility::string_t(), azure::storage::blob_listing_details::all, 0, azure::storage::blob_request_options()); + CHECK_EQUAL(3U, listing1.size()); + for (auto iter = listing1.begin(); iter != listing1.end(); ++iter) + { + auto blob = block_blobs.find(iter->name()); + CHECK(blob != block_blobs.end()); + + CHECK_UTF8_EQUAL(blob->second.uri().primary_uri().to_string(), iter->uri().primary_uri().to_string()); + CHECK_UTF8_EQUAL(blob->second.uri().secondary_uri().to_string(), iter->uri().secondary_uri().to_string()); + + auto index_str = blob->second.metadata().find(_XPLATSTR("index")); + CHECK(index_str != blob->second.metadata().end()); + auto index = utility::conversions::details::scan_string(index_str->second); + + CHECK_EQUAL(index * 16 * 1024, iter->properties().size()); + + switch (iter->properties().standard_blob_tier()) + { + case azure::storage::standard_blob_tier::hot: + CHECK(!iter->name().compare(_XPLATSTR("blockblob0"))); + CHECK(azure::storage::premium_blob_tier::unknown == iter->properties().premium_blob_tier()); + break; + case azure::storage::standard_blob_tier::cool: + CHECK(!iter->name().compare(_XPLATSTR("blockblob1"))); + CHECK(azure::storage::premium_blob_tier::unknown == iter->properties().premium_blob_tier()); + break; + case azure::storage::standard_blob_tier::archive: + 
CHECK(!iter->name().compare(_XPLATSTR("blockblob2"))); + CHECK(azure::storage::premium_blob_tier::unknown == iter->properties().premium_blob_tier()); + break; + default: + CHECK(false); + break; + } + block_blobs.erase(blob); + } + CHECK_EQUAL(0U, block_blobs.size()); + + //test page blob. + auto listing2 = list_all_blobs(m_premium_container, utility::string_t(), azure::storage::blob_listing_details::all, 0, azure::storage::blob_request_options()); + CHECK_EQUAL(3U, listing2.size()); + for (auto iter = listing2.begin(); iter != listing2.end(); ++iter) + { + auto blob = premium_page_blobs.find(iter->name()); + CHECK(blob != premium_page_blobs.end()); + + CHECK_UTF8_EQUAL(blob->second.uri().primary_uri().to_string(), iter->uri().primary_uri().to_string()); + CHECK_UTF8_EQUAL(blob->second.uri().secondary_uri().to_string(), iter->uri().secondary_uri().to_string()); + + auto index_str = blob->second.metadata().find(_XPLATSTR("index")); + CHECK(index_str != blob->second.metadata().end()); + auto index = utility::conversions::details::scan_string(index_str->second); + + CHECK_EQUAL(index * 512, iter->properties().size()); + + switch (iter->properties().premium_blob_tier()) + { + case azure::storage::premium_blob_tier::p4: + CHECK(!iter->name().compare(_XPLATSTR("pageblob0"))); + CHECK(azure::storage::standard_blob_tier::unknown == iter->properties().standard_blob_tier()); + break; + case azure::storage::premium_blob_tier::p6: + CHECK(!iter->name().compare(_XPLATSTR("pageblob1"))); + CHECK(azure::storage::standard_blob_tier::unknown == iter->properties().standard_blob_tier()); + break; + case azure::storage::premium_blob_tier::p10: + CHECK(!iter->name().compare(_XPLATSTR("pageblob2"))); + CHECK(azure::storage::standard_blob_tier::unknown == iter->properties().standard_blob_tier()); + break; + default: + CHECK(false); + break; + } + premium_page_blobs.erase(blob); + } + CHECK_EQUAL(0U, premium_page_blobs.size()); + m_premium_container.delete_container(); + 
m_blob_storage_container.delete_container();
+ }
+
 TEST_FIXTURE(blob_test_base, container_stored_policy)
 {
 auto stored_permissions = m_container.download_permissions(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context);
@@ -323,7 +522,7 @@ SUITE(Blob)
 /// Test the blob name with corner characters.
 TEST_FIXTURE(blob_test_base, corner_blob_name)
 {
- // Initialize the chareset to generate random blob name.
+ // Initialize the charset used to generate a random blob name.
 std::vector<utility::char_t> charset;
 utility::string_t characters = _XPLATSTR("`~!@#$%^&*()_+[{]}|;:\'\",<>?");
 for (size_t i = 0; i < characters.size(); ++i)
@@ -342,11 +541,283 @@ SUITE(Blob)
 auto listing = list_all_blobs(blob_name, azure::storage::blob_listing_details::all, 0, azure::storage::blob_request_options());
 CHECK(listing.size() == 1);
 
- // check the consistance of blob content.
+ // check the consistency of the blob content.
 auto download_content = blob.download_text();
 CHECK(content == download_content);
 
 blob.delete_blob();
 }
 }
+
+ // Test the timeout/cancellation token of cloud_blob_container
+ TEST_FIXTURE(container_test_base, container_create_delete_cancellation_timeout)
+ {
+ {
+ auto rand_container_name = get_random_string(20U);
+ auto container = m_client.get_container_reference(rand_container_name);
+ auto cancel_token_src = pplx::cancellation_token_source();
+ // cancel the cancellation prior to the operation
+ cancel_token_src.cancel();
+
+ std::string ex_msg;
+
+ try
+ {
+ auto task_result = container.create_async(azure::storage::blob_container_public_access_type::off, azure::storage::blob_request_options(), azure::storage::operation_context(), cancel_token_src.get_token());
+ task_result.get();
+ }
+ catch (azure::storage::storage_exception& e)
+ {
+ ex_msg = std::string(e.what());
+ }
+
+ CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+ CHECK(!container.exists(azure::storage::blob_request_options(), azure::storage::operation_context()));
+ }
+
+ {
+ auto rand_container_name =
get_random_string(20U); + auto container = m_client.get_container_reference(rand_container_name); + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + // cancel the cancellation after the operation + auto task_result = container.create_async(azure::storage::blob_container_public_access_type::off, azure::storage::blob_request_options(), azure::storage::operation_context(), cancel_token_src.get_token()); + task_result.get(); + cancel_token_src.cancel(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + CHECK(container.exists(azure::storage::blob_request_options(), azure::storage::operation_context())); + container.delete_container_if_exists(); + } + + { + auto rand_container_name = get_random_string(20U); + auto container = m_client.get_container_reference(rand_container_name); + // set the timeout to 1 millisecond, which should ALWAYS trigger the timeout exception. + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + + std::string ex_msg; + + try + { + auto task_result = container.create_async(azure::storage::blob_container_public_access_type::off, options, azure::storage::operation_context()); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + CHECK(!container.exists(azure::storage::blob_request_options(), azure::storage::operation_context())); + } + + { + auto rand_container_name = get_random_string(20U); + auto container = m_client.get_container_reference(rand_container_name); + // set the timeout to 100,000 millisecond, which should NEVER trigger the timeout exception. 
+ auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(100000)); + + std::string ex_msg; + + try + { + auto task_result = container.create_async(azure::storage::blob_container_public_access_type::off, options, azure::storage::operation_context()); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + CHECK(container.exists(azure::storage::blob_request_options(), azure::storage::operation_context())); + container.delete_container_if_exists(); + } + } + + TEST_FIXTURE(container_test_base, container_attributes_cancellation_timeout) + { + m_container.create(); + + { + auto cancel_token_src = pplx::cancellation_token_source(); + // cancel the cancellation prior to the operation + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_container.download_attributes_async(azure::storage::access_condition(), azure::storage::blob_request_options(), azure::storage::operation_context(), cancel_token_src.get_token()); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + // cancel the cancellation prior to the operation + cancel_token_src.cancel(); + + std::string ex_msg; + + auto permissions = azure::storage::blob_container_permissions(); + permissions.set_public_access(azure::storage::blob_container_public_access_type::container); + + try + { + auto task_result = m_container.upload_permissions_async(permissions, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + 
CHECK(azure::storage::blob_container_public_access_type::off == m_container.properties().public_access()); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + + std::string ex_msg; + + try + { + auto task_result = m_container.download_attributes_async(azure::storage::access_condition(), options, azure::storage::operation_context()); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + std::string ex_msg; + + auto permissions = azure::storage::blob_container_permissions(); + permissions.set_public_access(azure::storage::blob_container_public_access_type::container); + + try + { + auto task_result = m_container.upload_permissions_async(permissions, azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + CHECK(azure::storage::blob_container_public_access_type::off == m_container.properties().public_access()); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_container.download_attributes_async(azure::storage::access_condition(), azure::storage::blob_request_options(), azure::storage::operation_context(), cancel_token_src.get_token()); + task_result.get(); + // cancel the cancellation after the operation + cancel_token_src.cancel(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + } + + TEST_FIXTURE(container_test_base, 
container_list_blobs_cancellation_timeout) + { + m_container.create(); + + { + auto cancel_token_src = pplx::cancellation_token_source(); + // cancel the cancellation prior to the operation + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_container.list_blobs_segmented_async(_XPLATSTR(""), false, azure::storage::blob_listing_details::values::none, 100000, azure::storage::continuation_token(), azure::storage::blob_request_options(), azure::storage::operation_context(), cancel_token_src.get_token()); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + + std::string ex_msg; + + try + { + auto task_result = m_container.list_blobs_segmented_async(_XPLATSTR(""), false, azure::storage::blob_listing_details::values::none, 100000, azure::storage::continuation_token(), options, azure::storage::operation_context()); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_container.list_blobs_segmented_async(_XPLATSTR(""), false, azure::storage::blob_listing_details::values::none, 100000, azure::storage::continuation_token(), azure::storage::blob_request_options(), azure::storage::operation_context(), cancel_token_src.get_token()); + task_result.get(); + // cancel the cancellation after the operation + cancel_token_src.cancel(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + } } diff --git 
a/Microsoft.WindowsAzure.Storage/tests/cloud_blob_test.cpp b/Microsoft.WindowsAzure.Storage/tests/cloud_blob_test.cpp index e87f10e7..3b33d5f1 100644 --- a/Microsoft.WindowsAzure.Storage/tests/cloud_blob_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/cloud_blob_test.cpp @@ -22,6 +22,8 @@ #include "cpprest/producerconsumerstream.h" +#include "wascore/util.h" + #pragma region Fixture bool blob_test_base::wait_for_copy(azure::storage::cloud_blob& blob) @@ -43,7 +45,7 @@ azure::storage::operation_context blob_test_base::upload_and_download(azure::sto utility::string_t md5_header; context.set_sending_request([&md5_header] (web::http::http_request& request, azure::storage::operation_context) { - if (!request.headers().match(_XPLATSTR("x-ms-blob-content-md5"), md5_header)) + if (!request.headers().match(azure::storage::protocol::ms_header_blob_content_md5, md5_header)) { md5_header.clear(); } @@ -52,7 +54,7 @@ azure::storage::operation_context blob_test_base::upload_and_download(azure::sto std::vector buffer; buffer.resize(buffer_size); size_t target_blob_size = blob_size == 0 ? 
buffer_size - buffer_offset : blob_size; - auto md5 = fill_buffer_and_get_md5(buffer, buffer_offset, target_blob_size); + auto md5 = fill_buffer_and_get_md5(buffer, buffer_offset, std::min(target_blob_size, buffer.size() - buffer_offset)); concurrency::streams::istream stream; if (use_seekable_stream) @@ -111,10 +113,11 @@ azure::storage::operation_context blob_test_base::upload_and_download(azure::sto azure::storage::blob_request_options download_options(options); download_options.set_use_transactional_md5(false); + download_options.set_use_transactional_crc64(false); concurrency::streams::container_buffer> output_buffer; blob.download_to_stream(output_buffer.create_ostream(), azure::storage::access_condition(), download_options, context); - CHECK_ARRAY_EQUAL(buffer.data() + buffer_offset, output_buffer.collection().data(),(int) target_blob_size); + CHECK_ARRAY_EQUAL(buffer.data() + buffer_offset, output_buffer.collection().data(), int(target_blob_size)); context.set_sending_request(std::function()); return context; @@ -129,7 +132,7 @@ void blob_test_base::check_access(const utility::string_t& sas_token, uint8_t pe } azure::storage::cloud_blob_container container(m_container.uri(), credentials); - azure::storage::cloud_blob blob = container.get_blob_reference(original_blob.name()); + azure::storage::cloud_blob blob = container.get_blob_reference(original_blob.name(), original_blob.snapshot_time()); if (permissions & azure::storage::blob_shared_access_policy::permissions::list) { @@ -194,13 +197,16 @@ void blob_test_base::check_access(const utility::string_t& sas_token, uint8_t pe CHECK_THROW(blob.download_attributes(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); } - if (permissions & azure::storage::blob_shared_access_policy::permissions::write) - { - blob.upload_metadata(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); - } - else + if 
(!blob.is_snapshot()) { - CHECK_THROW(blob.upload_metadata(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); + if (permissions & azure::storage::blob_shared_access_policy::permissions::write) + { + blob.upload_metadata(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + } + else + { + CHECK_THROW(blob.upload_metadata(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); + } } if (permissions & azure::storage::blob_shared_access_policy::permissions::del) @@ -288,42 +294,87 @@ SUITE(Blob) CHECK(blob.properties().content_type().empty()); CHECK(azure::storage::lease_status::unspecified == blob.properties().lease_status()); - auto same_blob = m_container.get_page_blob_reference(blob.name()); - same_blob.download_attributes(azure::storage::access_condition(), options, m_context); - CHECK_EQUAL(1024, same_blob.properties().size()); - CHECK_UTF8_EQUAL(blob.properties().etag(), same_blob.properties().etag()); - CHECK(blob.properties().last_modified() == same_blob.properties().last_modified()); - CHECK(same_blob.properties().cache_control().empty()); - CHECK(same_blob.properties().content_disposition().empty()); - CHECK(same_blob.properties().content_encoding().empty()); - CHECK(same_blob.properties().content_language().empty()); - CHECK(same_blob.properties().content_md5().empty()); - CHECK_UTF8_EQUAL(_XPLATSTR("application/octet-stream"), same_blob.properties().content_type()); - CHECK(azure::storage::lease_status::unlocked == same_blob.properties().lease_status()); - CHECK(blob.properties().server_encrypted() == same_blob.properties().server_encrypted()); + { + auto same_blob = m_container.get_page_blob_reference(blob.name()); + same_blob.download_attributes(azure::storage::access_condition(), options, m_context); + CHECK_EQUAL(1024, same_blob.properties().size()); + 
CHECK_UTF8_EQUAL(blob.properties().etag(), same_blob.properties().etag()); + CHECK(blob.properties().last_modified() == same_blob.properties().last_modified()); + CHECK(same_blob.properties().cache_control().empty()); + CHECK(same_blob.properties().content_disposition().empty()); + CHECK(same_blob.properties().content_encoding().empty()); + CHECK(same_blob.properties().content_language().empty()); + CHECK(same_blob.properties().content_md5().empty()); + CHECK_UTF8_EQUAL(_XPLATSTR("application/octet-stream"), same_blob.properties().content_type()); + CHECK(azure::storage::lease_status::unlocked == same_blob.properties().lease_status()); + + std::this_thread::sleep_for(std::chrono::seconds(1)); - std::this_thread::sleep_for(std::chrono::seconds(1)); + blob.properties().set_cache_control(_XPLATSTR("no-transform")); + blob.properties().set_content_disposition(_XPLATSTR("attachment")); + blob.properties().set_content_encoding(_XPLATSTR("gzip")); + blob.properties().set_content_language(_XPLATSTR("tr,en")); + blob.properties().set_content_md5(dummy_md5); + blob.properties().set_content_type(_XPLATSTR("text/html")); + blob.upload_properties(azure::storage::access_condition(), options, m_context); + CHECK(blob.properties().etag() != same_blob.properties().etag()); + CHECK(blob.properties().last_modified().to_interval() > same_blob.properties().last_modified().to_interval()); - blob.properties().set_cache_control(_XPLATSTR("no-transform")); - blob.properties().set_content_disposition(_XPLATSTR("attachment")); - blob.properties().set_content_encoding(_XPLATSTR("gzip")); - blob.properties().set_content_language(_XPLATSTR("tr,en")); - blob.properties().set_content_md5(dummy_md5); - blob.properties().set_content_type(_XPLATSTR("text/html")); - blob.upload_properties(azure::storage::access_condition(), options, m_context); - CHECK(blob.properties().etag() != same_blob.properties().etag()); - CHECK(blob.properties().last_modified().to_interval() > 
same_blob.properties().last_modified().to_interval()); + same_blob.download_attributes(azure::storage::access_condition(), options, m_context); + check_blob_properties_equal(blob.properties(), same_blob.properties(), true); + } - same_blob.download_attributes(azure::storage::access_condition(), options, m_context); - check_blob_properties_equal(blob.properties(), same_blob.properties()); + { + auto same_blob = m_container.get_page_blob_reference(blob.name()); + auto stream = concurrency::streams::container_stream>::open_ostream(); + same_blob.download_to_stream(stream, azure::storage::access_condition(), options, m_context); + check_blob_properties_equal(blob.properties(), same_blob.properties(), true); + } - auto still_same_blob = m_container.get_page_blob_reference(blob.name()); - auto stream = concurrency::streams::container_stream>::open_ostream(); - still_same_blob.download_to_stream(stream, azure::storage::access_condition(), options, m_context); - check_blob_properties_equal(blob.properties(), still_same_blob.properties()); + { + auto same_blob = m_container.get_page_blob_reference(blob.name()); + auto stream = concurrency::streams::container_stream>::open_ostream(); + azure::storage::blob_request_options local_options; + local_options.set_use_transactional_md5(true); + same_blob.download_range_to_stream(stream, 0, 128, azure::storage::access_condition(), local_options, azure::storage::operation_context()); + check_blob_properties_equal(blob.properties(), same_blob.properties(), true); + } + + { + auto listing = list_all_blobs(utility::string_t(), azure::storage::blob_listing_details::all, 0, options); + check_blob_properties_equal(blob.properties(), listing.front().properties(), true); + } - auto listing = list_all_blobs(utility::string_t(), azure::storage::blob_listing_details::all, 0, options); - check_blob_properties_equal(blob.properties(), listing.front().properties()); + { + blob.properties().set_content_md5(_XPLATSTR("")); + 
blob.upload_properties(azure::storage::access_condition(), options, m_context); + + auto same_blob = m_container.get_page_blob_reference(blob.name()); + auto stream = concurrency::streams::container_stream<std::vector<uint8_t>>::open_ostream(); + azure::storage::blob_request_options local_options; + local_options.set_use_transactional_md5(true); + same_blob.download_range_to_stream(stream, 0, 128, azure::storage::access_condition(), local_options, azure::storage::operation_context()); + check_blob_properties_equal(blob.properties(), same_blob.properties(), true); + } + } + + TEST_FIXTURE(blob_test_base, download_account_properties_blob) + { + auto blob = m_container.get_blob_reference(_XPLATSTR("tmpblob")); + auto properties = m_container.get_blob_reference(_XPLATSTR("tmpblob")).download_account_properties(); + CHECK(!properties.sku_name().empty()); + CHECK(!properties.account_kind().empty()); + + azure::storage::blob_shared_access_policy access_policy; + access_policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_minutes(30)); + access_policy.set_permissions(azure::storage::account_shared_access_policy::read); + + azure::storage::storage_credentials sas_credentials(blob.get_shared_access_signature(access_policy)); + azure::storage::cloud_blob sas_blob(blob.uri(), sas_credentials); + + properties = sas_blob.download_account_properties(); + CHECK(!properties.sku_name().empty()); + CHECK(!properties.account_kind().empty()); } TEST_FIXTURE(blob_test_base, blob_type) @@ -392,6 +443,49 @@ SUITE(Blob) CHECK(blob.metadata().empty()); } + TEST_FIXTURE(blob_test_base, blob_whitespace_metadata) + { + // Create with 3 pairs that have spaces in their values.
+ auto blob = m_container.get_block_blob_reference(_XPLATSTR("blockblob")); + blob.metadata()[_XPLATSTR("key1")] = _XPLATSTR(" value1 "); + blob.metadata()[_XPLATSTR("key2")] = _XPLATSTR(" value2"); + blob.metadata()[_XPLATSTR("key3")] = _XPLATSTR("value3 "); + blob.upload_text(utility::string_t()); + + auto same_blob = m_container.get_blob_reference(blob.name()); + CHECK(same_blob.metadata().empty()); + same_blob.download_attributes(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + CHECK_EQUAL(3U, same_blob.metadata().size()); + CHECK_UTF8_EQUAL(_XPLATSTR("value1"), same_blob.metadata()[_XPLATSTR("key1")]); + CHECK_UTF8_EQUAL(_XPLATSTR("value2"), same_blob.metadata()[_XPLATSTR("key2")]); + CHECK_UTF8_EQUAL(_XPLATSTR("value3"), same_blob.metadata()[_XPLATSTR("key3")]); + + // Add 1 pair with only spaces in name + auto same_blob1 = m_container.get_blob_reference(blob.name()); + same_blob1.metadata()[_XPLATSTR(" ")] = _XPLATSTR("value"); + CHECK_THROW(same_blob1.upload_metadata(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context), std::invalid_argument); + + // Add 1 pair with trailing spaces in name + auto same_blob2 = m_container.get_blob_reference(blob.name()); + same_blob2.metadata()[_XPLATSTR("key1 ")] = _XPLATSTR("value"); + CHECK_THROW(same_blob2.upload_metadata(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context), std::invalid_argument); + + // Add 1 pair with leading spaces in name + auto same_blob3 = m_container.get_blob_reference(blob.name()); + same_blob3.metadata()[_XPLATSTR(" key")] = _XPLATSTR("value"); + CHECK_THROW(same_blob3.upload_metadata(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context), std::invalid_argument); + + // Add 1 pair with a space inside the name + auto same_blob4 = m_container.get_blob_reference(blob.name()); + same_blob4.metadata()[_XPLATSTR("key key")] = _XPLATSTR("value"); + 
CHECK_THROW(same_blob4.upload_metadata(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context), std::invalid_argument); + + // Add 1 pair with empty name + auto same_blob5 = m_container.get_blob_reference(blob.name()); + same_blob5.metadata()[_XPLATSTR("")] = _XPLATSTR("value"); + CHECK_THROW(same_blob5.upload_metadata(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context), std::invalid_argument); + } + TEST_FIXTURE(blob_test_base, blob_invalid_sas_and_snapshot) { azure::storage::blob_shared_access_policy policy; @@ -438,7 +532,7 @@ SUITE(Blob) policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_minutes(30)); auto sas_token = m_container.get_shared_access_signature(policy); - auto blob = m_container.get_block_blob_reference(_XPLATSTR("blob") + utility::conversions::print_string((int)i)); + auto blob = m_container.get_block_blob_reference(_XPLATSTR("blob") + azure::storage::core::convert_to_string((int)i)); blob.properties().set_cache_control(_XPLATSTR("no-transform")); blob.properties().set_content_disposition(_XPLATSTR("attachment")); blob.properties().set_content_encoding(_XPLATSTR("gzip")); @@ -461,7 +555,7 @@ SUITE(Blob) policy.set_start(utility::datetime::utc_now() - utility::datetime::from_minutes(5)); policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_minutes(30)); - auto blob = m_container.get_block_blob_reference(_XPLATSTR("blob") + utility::conversions::print_string((int)i)); + auto blob = m_container.get_block_blob_reference(_XPLATSTR("blob") + azure::storage::core::convert_to_string((int)i)); blob.properties().set_cache_control(_XPLATSTR("no-transform")); blob.properties().set_content_disposition(_XPLATSTR("attachment")); blob.properties().set_content_encoding(_XPLATSTR("gzip")); @@ -542,6 +636,14 @@ SUITE(Blob) CHECK_THROW(snapshot1.upload_metadata(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context), 
std::logic_error); CHECK_THROW(snapshot1.create_snapshot(azure::storage::cloud_metadata(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context), std::logic_error); + azure::storage::blob_shared_access_policy policy; + auto permissions = azure::storage::blob_shared_access_policy::read; + policy.set_permissions(permissions); + policy.set_start(utility::datetime::utc_now() - utility::datetime::from_minutes(5)); + policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_minutes(30)); + auto sas_token = snapshot1.get_shared_access_signature(policy); + check_access(sas_token, permissions, azure::storage::cloud_blob_shared_access_headers(), snapshot1); + std::this_thread::sleep_for(std::chrono::seconds(1)); auto snapshot2 = m_blob.create_snapshot(azure::storage::cloud_metadata(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); @@ -549,19 +651,19 @@ SUITE(Blob) snapshot1.download_attributes(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); m_blob.download_attributes(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); - check_blob_properties_equal(m_blob.properties(), snapshot1.properties()); + check_blob_properties_equal(m_blob.properties(), snapshot1.properties(), true); web::http::uri snapshot1_primary_uri(m_blob.uri().primary_uri().to_string() + _XPLATSTR("?snapshot=") + snapshot1.snapshot_time()); web::http::uri snapshot1_secondary_uri(m_blob.uri().secondary_uri().to_string() + _XPLATSTR("?snapshot=") + snapshot1.snapshot_time()); azure::storage::cloud_blob snapshot1_clone(azure::storage::storage_uri(snapshot1_primary_uri, snapshot1_secondary_uri), m_blob.service_client().credentials()); CHECK(snapshot1.snapshot_time() == snapshot1_clone.snapshot_time()); snapshot1_clone.download_attributes(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); - 
check_blob_properties_equal(snapshot1.properties(), snapshot1_clone.properties()); + check_blob_properties_equal(snapshot1.properties(), snapshot1_clone.properties(), true); azure::storage::cloud_blob snapshot1_clone2(m_blob.uri(), snapshot1.snapshot_time(), m_blob.service_client().credentials()); CHECK(snapshot1.snapshot_time() == snapshot1_clone2.snapshot_time()); snapshot1_clone2.download_attributes(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); - check_blob_properties_equal(snapshot1.properties(), snapshot1_clone2.properties()); + check_blob_properties_equal(snapshot1.properties(), snapshot1_clone2.properties(), true); m_blob.upload_text(_XPLATSTR("2"), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); CHECK_UTF8_EQUAL(_XPLATSTR("1"), azure::storage::cloud_block_blob(snapshot1).download_text(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context)); @@ -629,6 +731,21 @@ SUITE(Blob) CHECK_THROW(copy2.start_copy(blob, azure::storage::access_condition::generate_if_match_condition(blob.properties().etag()), azure::storage::access_condition::generate_if_match_condition(_XPLATSTR("\"0xFFFFFFFFFFFFFFF\"")), azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); } + TEST_FIXTURE(blob_test_base, blob_copy_with_premium_access_tier) + { + m_premium_container.create(azure::storage::blob_container_public_access_type::off, azure::storage::blob_request_options(), m_context); + auto blob = m_premium_container.get_page_blob_reference(_XPLATSTR("source")); + blob.create(1024); + + auto dest = m_premium_container.get_page_blob_reference(_XPLATSTR("dest")); + azure::storage::blob_request_options options; + + dest.start_copy(defiddler(blob.uri().primary_uri()), azure::storage::premium_blob_tier::p30, azure::storage::access_condition(), azure::storage::access_condition(), options, m_context); + CHECK(azure::storage::premium_blob_tier::p30 == 
dest.properties().premium_blob_tier()); + dest.download_attributes(); + CHECK(azure::storage::premium_blob_tier::p30 == dest.properties().premium_blob_tier()); + } + /// /// Test blob copy from a cloud_blob object using sas token. /// @@ -680,7 +797,7 @@ SUITE(Blob) for (size_t i = 0; i < 2; ++i) { auto file_name = this->get_random_string(); - auto share = test_config::instance().account().create_cloud_file_client().get_share_reference(_XPLATSTR("testshare")); + auto share = test_config::instance().account().create_cloud_file_client().get_share_reference(_XPLATSTR("testshare") + get_random_string()); share.create_if_not_exists(); auto source = share.get_root_directory_reference().get_file_reference(file_name); source.upload_text(_XPLATSTR("1"), azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); @@ -698,9 +815,47 @@ SUITE(Blob) CHECK(wait_for_copy(dest)); CHECK_THROW(dest.abort_copy(copy_id, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context), azure::storage::storage_exception); CHECK_EQUAL(web::http::status_codes::Conflict, m_context.request_results().back().http_status_code()); + share.delete_share(); } } + /// + /// Test blob copy with metadata. 
+ /// + TEST_FIXTURE(blob_test_base, blob_copy_metadata) + { + auto src_blob = m_container.get_block_blob_reference(_XPLATSTR("src_blob")); + azure::storage::cloud_metadata metadata1; + metadata1[_XPLATSTR("m_key1")] = _XPLATSTR("m_val1"); + metadata1[_XPLATSTR("m_key2")] = _XPLATSTR("m_val1"); + metadata1[_XPLATSTR("m_key3")] = _XPLATSTR("m_val2"); + metadata1[_XPLATSTR("m_key4")] = _XPLATSTR("m_val3"); + azure::storage::cloud_metadata metadata2; + metadata2[_XPLATSTR("mm_key1")] = _XPLATSTR("mm_val1"); + metadata2[_XPLATSTR("mm_key2")] = _XPLATSTR("mm_val2"); + src_blob.metadata() = metadata1; + src_blob.upload_text(_XPLATSTR("Hello world!")); + auto dest_blob = m_container.get_block_blob_reference(_XPLATSTR("dest_blob")); + + // dest doesn't exist + dest_blob.start_copy(src_blob); + CHECK(wait_for_copy(dest_blob)); + dest_blob.download_attributes(); + CHECK(metadata1 == dest_blob.metadata()); + + // dest exists with metadata, now we want to override metadata. + dest_blob.start_copy(src_blob, metadata2, azure::storage::access_condition(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + CHECK(wait_for_copy(dest_blob)); + dest_blob.download_attributes(); + CHECK(metadata2 == dest_blob.metadata()); + + // dest exists with metadata, keep src's metadata. 
+ dest_blob.start_copy(src_blob, azure::storage::cloud_metadata(), azure::storage::access_condition(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + CHECK(wait_for_copy(dest_blob)); + dest_blob.download_attributes(); + CHECK(metadata1 == dest_blob.metadata()); + } + /// <summary> /// Test parallel download /// </summary> @@ -715,6 +870,7 @@ SUITE(Blob) option.set_parallelism_factor(2); std::vector<uint8_t> data; data.resize(target_length); + fill_buffer(data); concurrency::streams::container_buffer<std::vector<uint8_t>> upload_buffer(data); blob.upload_from_stream(upload_buffer.create_istream(), azure::storage::access_condition(), option, m_context); @@ -738,6 +894,7 @@ option.set_parallelism_factor(2); std::vector<uint8_t> data; data.resize(target_length); + fill_buffer(data); concurrency::streams::container_buffer<std::vector<uint8_t>> upload_buffer(data); blob.upload_from_stream(upload_buffer.create_istream(), azure::storage::access_condition(), option, m_context); @@ -753,6 +910,102 @@ } } + /// <summary> + /// Test parallel download with offset + /// </summary> + TEST_FIXTURE(blob_test_base, parallel_download_with_offset) + { + // blob with size larger than 32MB. + // With offset not zero. + { + auto blob_name = get_random_string(20); + auto blob = m_container.get_block_blob_reference(blob_name); + size_t target_length = 100 * 1024 * 1024; + azure::storage::blob_request_options option; + option.set_parallelism_factor(2); + std::vector<uint8_t> data; + data.resize(target_length); + fill_buffer(data); + concurrency::streams::container_buffer<std::vector<uint8_t>> upload_buffer(data); + blob.upload_from_stream(upload_buffer.create_istream(), azure::storage::access_condition(), option, m_context); + + // download target blob in parallel.
+ azure::storage::operation_context context; + concurrency::streams::container_buffer<std::vector<uint8_t>> download_buffer; + + utility::size64_t actual_offset = get_random_int32() % 255 + 1; + utility::size64_t actual_length = target_length - actual_offset; + blob.download_range_to_stream(download_buffer.create_ostream(), actual_offset, actual_length, azure::storage::access_condition(), option, context); + + check_parallelism(context, 2); + CHECK(blob.properties().size() == target_length); + CHECK(download_buffer.collection().size() == actual_length); + CHECK(std::equal(data.begin() + actual_offset, data.end(), download_buffer.collection().begin())); + } + + // blob with size larger than 32MB. + // With offset not zero, length = max. + { + auto blob_name = get_random_string(20); + auto blob = m_container.get_block_blob_reference(blob_name); + size_t target_length = 100 * 1024 * 1024; + azure::storage::blob_request_options option; + option.set_parallelism_factor(2); + std::vector<uint8_t> data; + data.resize(target_length); + fill_buffer(data); + concurrency::streams::container_buffer<std::vector<uint8_t>> upload_buffer(data); + blob.upload_from_stream(upload_buffer.create_istream(), azure::storage::access_condition(), option, m_context); + + // download target blob in parallel.
+ azure::storage::operation_context context; + concurrency::streams::container_buffer<std::vector<uint8_t>> download_buffer; + + utility::size64_t actual_offset = get_random_int32() % 255 + 1; + utility::size64_t actual_length = target_length - actual_offset; + blob.download_range_to_stream(download_buffer.create_ostream(), actual_offset, std::numeric_limits<utility::size64_t>::max(), azure::storage::access_condition(), option, context); + + check_parallelism(context, 2); + CHECK(blob.properties().size() == target_length); + CHECK(download_buffer.collection().size() == actual_length); + CHECK(std::equal(data.begin() + actual_offset, data.end(), download_buffer.collection().begin())); + } + } + + /// <summary> + /// Test parallel download with length too large + /// </summary> + TEST_FIXTURE(blob_test_base, parallel_download_with_length_too_large) + { + // blob with size larger than 32MB. + // With offset not zero. + { + auto blob_name = get_random_string(20); + auto blob = m_container.get_block_blob_reference(blob_name); + size_t target_length = 100 * 1024 * 1024; + azure::storage::blob_request_options option; + option.set_parallelism_factor(10); + std::vector<uint8_t> data; + data.resize(target_length); + fill_buffer(data); + concurrency::streams::container_buffer<std::vector<uint8_t>> upload_buffer(data); + blob.upload_from_stream(upload_buffer.create_istream(), azure::storage::access_condition(), option, m_context); + + // download target blob in parallel.
+ azure::storage::operation_context context; + concurrency::streams::container_buffer> download_buffer; + + utility::size64_t actual_offset = get_random_int32() % 255 + 1; + utility::size64_t actual_length = target_length - actual_offset; + blob.download_range_to_stream(download_buffer.create_ostream(), actual_offset, actual_length * 2, azure::storage::access_condition(), option, context); + + check_parallelism(context, 10); + CHECK(blob.properties().size() == target_length); + CHECK(download_buffer.collection().size() == actual_length); + CHECK(std::equal(data.begin() + actual_offset, data.end(), download_buffer.collection().begin())); + } + } + TEST_FIXTURE(blob_test_base, parallel_download_with_md5) { // transactional md5 enabled. @@ -766,6 +1019,7 @@ SUITE(Blob) option.set_use_transactional_md5(true); std::vector data; data.resize(target_length); + fill_buffer(data); concurrency::streams::container_buffer> upload_buffer(data); blob.upload_from_stream(upload_buffer.create_istream(), azure::storage::access_condition(), option, m_context); @@ -790,6 +1044,7 @@ SUITE(Blob) option.set_use_transactional_md5(true); std::vector data; data.resize(target_length); + fill_buffer(data); concurrency::streams::container_buffer> upload_buffer(data); blob.upload_from_stream(upload_buffer.create_istream(), azure::storage::access_condition(), option, m_context); @@ -826,4 +1081,138 @@ SUITE(Blob) check_parallelism(context, 1); CHECK(blob.properties().size() == target_length); } + + TEST_FIXTURE(blob_test_base, range_not_satisfiable_exception) + { + auto blob_name = get_random_string(20); + auto blob = m_container.get_block_blob_reference(blob_name); + blob.upload_text(utility::string_t()); + + auto blob2 = m_container.get_block_blob_reference(blob_name + _XPLATSTR("2")); + blob2.upload_text(_XPLATSTR("abcd")); + + azure::storage::blob_request_options options1; + options1.set_parallelism_factor(1); + options1.set_use_transactional_crc64(false); + + 
azure::storage::blob_request_options options2; + options2.set_parallelism_factor(2); + options2.set_use_transactional_crc64(false); + + azure::storage::blob_request_options options3; + options3.set_parallelism_factor(1); + options3.set_use_transactional_crc64(true); + + for (const auto& option : { options1, options2, options3 }) { + concurrency::streams::container_buffer> download_buffer; + + // download whole blob, no exception + blob.download_to_stream(download_buffer.create_ostream(), azure::storage::access_condition(), option, azure::storage::operation_context()); + + // download range, should throw + CHECK_THROW(blob.download_range_to_stream(download_buffer.create_ostream(), 0, 100, azure::storage::access_condition(), option, azure::storage::operation_context()), azure::storage::storage_exception); + + // download range(max, ...), no exception + blob.download_range_to_stream(download_buffer.create_ostream(), std::numeric_limits::max(), 0, azure::storage::access_condition(), option, azure::storage::operation_context()); + + // download range(3, very large), no exception + blob2.download_range_to_stream(download_buffer.create_ostream(), 3, 100, azure::storage::access_condition(), option, azure::storage::operation_context()); + + // download range(4, ...), should throw + CHECK_THROW(blob2.download_range_to_stream(download_buffer.create_ostream(), 4, 100, azure::storage::access_condition(), option, azure::storage::operation_context()), azure::storage::storage_exception); + } + } + + TEST_FIXTURE(blob_test_base, read_blob_with_invalid_if_none_match) + { + auto blob_name = get_random_string(20); + auto blob = m_container.get_block_blob_reference(blob_name); + blob.upload_text(_XPLATSTR("test")); + + azure::storage::operation_context context; + azure::storage::access_condition condition; + condition.set_if_none_match_etag(_XPLATSTR("*")); + CHECK_THROW(blob.download_text(condition, azure::storage::blob_request_options(), context), azure::storage::storage_exception); 
+        CHECK_EQUAL(web::http::status_codes::BadRequest, context.request_results().back().http_status_code());
+    }
+
+    TEST_FIXTURE(blob_test_base, blob_concurrent_download_cancellation_timeout)
+    {
+        utility::size64_t length = 260 * 1024 * 1024;
+        std::vector<uint8_t> buffer;
+        buffer.resize(length);
+        fill_buffer_and_get_md5(buffer);
+        auto blob_name = get_random_string(20);
+        auto blob = m_container.get_block_blob_reference(blob_name);
+        blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer), length);
+
+        {
+            concurrency::streams::container_buffer<std::vector<uint8_t>> output_buffer;
+            auto cancel_token_src = pplx::cancellation_token_source();
+            auto options = azure::storage::blob_request_options();
+            options.set_parallelism_factor(4);
+            options.set_maximum_execution_time(std::chrono::milliseconds(10000));
+            // cancel the token prior to the operation
+            cancel_token_src.cancel();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = blob.download_to_stream_async(output_buffer, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token());
+                task_result.get();
+            }
+            catch (azure::storage::storage_exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+        }
+
+        {
+            concurrency::streams::container_buffer<std::vector<uint8_t>> output_buffer;
+            auto cancel_token_src = pplx::cancellation_token_source();
+            auto options = azure::storage::blob_request_options();
+            options.set_parallelism_factor(4);
+            options.set_maximum_execution_time(std::chrono::milliseconds(10000));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = blob.download_to_stream_async(output_buffer, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token());
+                std::this_thread::sleep_for(std::chrono::milliseconds(300)); // sleep for some time before canceling the request, then check the result.
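The cancellation scenarios above cancel a transfer both before it starts and while it is in flight. A minimal sketch of the underlying cooperative pattern, with all names invented for illustration: a worker polls a shared flag between chunks, much as a chunked transfer can observe its cancellation token between sub-requests.

```cpp
#include <atomic>
#include <string>

// Process a fixed number of chunks, polling a cancellation flag between
// chunks. Returns a message analogous to OPERATION_CANCELED when the flag
// is set. Illustrative sketch only; not library API.
std::string transfer_chunks(int total_chunks, const std::atomic<bool>& cancelled)
{
    for (int i = 0; i < total_chunks; ++i)
    {
        if (cancelled.load())
        {
            return "Operation canceled";
        }
        // ... fetch chunk i here ...
    }
    return "done";
}
```

Canceling before the first chunk and canceling mid-transfer both surface the same cancellation result, which is what the two token-based cases above assert.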
+ cancel_token_src.cancel(); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + concurrency::streams::container_buffer> output_buffer; + auto options = azure::storage::blob_request_options(); + options.set_parallelism_factor(4); + options.set_maximum_execution_time(std::chrono::milliseconds(1000)); + + std::string ex_msg; + + try + { + auto task_result = blob.download_to_stream_async(output_buffer, azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + } } diff --git a/Microsoft.WindowsAzure.Storage/tests/cloud_block_blob_test.cpp b/Microsoft.WindowsAzure.Storage/tests/cloud_block_blob_test.cpp index cccd88fe..d44b0e63 100644 --- a/Microsoft.WindowsAzure.Storage/tests/cloud_block_blob_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/cloud_block_blob_test.cpp @@ -20,7 +20,10 @@ #include "check_macros.h" #include "cpprest/producerconsumerstream.h" +#include "cpprest/rawptrstream.h" +#include "was/crc64.h" #include "wascore/constants.h" +#include "wascore/util.h" #pragma region Fixture @@ -68,6 +71,68 @@ void block_blob_test_base::check_block_list_equal(const std::vector +class currupted_ostreambuf : public Concurrency::streams::details::basic_rawptr_buffer<_CharType> +{ +public: + currupted_ostreambuf(bool keep_writable, int recover_on_nretries) + : Concurrency::streams::details::basic_rawptr_buffer<_CharType>(), m_keepwritable(keep_writable), m_recover_on_nretries(recover_on_nretries) + { } + + pplx::task _putn(const _CharType* ptr, size_t count) override + { + UNREFERENCED_PARAMETER(ptr); + try + { + ++m_call_count; + if (m_call_count < m_recover_on_nretries + 1) + { + throw azure::storage::storage_exception(INTENDED_ERR_MSG); + } 
+ return pplx::task_from_result(count); + } + catch (...) + { + return pplx::task_from_exception(std::current_exception()); + } + } + + bool can_write() const override + { + return m_keepwritable + ? true + : Concurrency::streams::details::basic_rawptr_buffer<_CharType>::can_write(); + } + + int call_count() const + { + return m_call_count; + } + +private: + int m_call_count = 0; + int m_recover_on_nretries = 0; + bool m_keepwritable = false; +}; + +template +class currupted_stream +{ +public: + typedef _CharType char_type; + typedef currupted_ostreambuf<_CharType> buffer_type; + + static concurrency::streams::basic_ostream open_ostream(bool keep_writable, int recover_on_nretries) + { + return concurrency::streams::basic_ostream(concurrency::streams::streambuf(std::make_shared(keep_writable, recover_on_nretries))); + } +}; + #pragma endregion SUITE(Blob) @@ -81,20 +146,28 @@ SUITE(Blob) std::vector committed_blocks; utility::string_t md5_header; - m_context.set_sending_request([&md5_header] (web::http::http_request& request, azure::storage::operation_context) + utility::string_t crc64_header; + m_context.set_sending_request([&md5_header, &crc64_header] (web::http::http_request& request, azure::storage::operation_context) { if (!request.headers().match(web::http::header_names::content_md5, md5_header)) { md5_header.clear(); } + if (!request.headers().match(azure::storage::protocol::ms_header_content_crc64, crc64_header)) + { + crc64_header.clear(); + } }); + uint16_t block_id_counter = 0; + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); for (uint16_t i = 0; i < 3; ++i) { - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); - auto block_id = get_block_id(i); + auto block_id = get_block_id(block_id_counter++); uncommitted_blocks.push_back(azure::storage::block_list_item(block_id)); m_blob.upload_block(block_id, stream, utility::string_t(), 
azure::storage::access_condition(), options, m_context); CHECK_UTF8_EQUAL(utility::string_t(), md5_header); @@ -106,11 +179,12 @@ SUITE(Blob) uncommitted_blocks.clear(); options.set_use_transactional_md5(false); - for (uint16_t i = 3; i < 6; ++i) + options.set_use_transactional_crc64(false); + for (uint16_t i = 0; i < 3; ++i) { auto md5 = fill_buffer_and_get_md5(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); - auto block_id = get_block_id(i); + auto block_id = get_block_id(block_id_counter++); uncommitted_blocks.push_back(azure::storage::block_list_item(block_id)); m_blob.upload_block(block_id, stream, md5, azure::storage::access_condition(), options, m_context); CHECK_UTF8_EQUAL(md5, md5_header); @@ -121,27 +195,50 @@ SUITE(Blob) m_blob.upload_block_list(committed_blocks, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); uncommitted_blocks.clear(); + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); + for (uint16_t i = 0; i < 3; ++i) + { + auto crc64 = fill_buffer_and_get_crc64(buffer); + uint64_t crc64_val = azure::storage::crc64(buffer.data(), buffer.size()); + auto stream = concurrency::streams::bytestream::open_istream(buffer); + auto block_id = get_block_id(block_id_counter++); + uncommitted_blocks.push_back(azure::storage::block_list_item(block_id)); + m_blob.upload_block(block_id, stream, crc64_val, azure::storage::access_condition(), options, m_context); + CHECK_UTF8_EQUAL(crc64, crc64_header); + } + + check_block_list_equal(committed_blocks, uncommitted_blocks); + std::copy(uncommitted_blocks.begin(), uncommitted_blocks.end(), std::back_inserter(committed_blocks)); + m_blob.upload_block_list(committed_blocks, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + uncommitted_blocks.clear(); + options.set_use_transactional_md5(true); - for (uint16_t i = 6; i < 9; ++i) + options.set_use_transactional_crc64(false); + 
for (uint16_t i = 0; i < 3; ++i) { auto md5 = fill_buffer_and_get_md5(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); - auto block_id = get_block_id(i); + auto block_id = get_block_id(block_id_counter++); uncommitted_blocks.push_back(azure::storage::block_list_item(block_id)); m_blob.upload_block(block_id, stream, utility::string_t(), azure::storage::access_condition(), options, m_context); CHECK_UTF8_EQUAL(md5, md5_header); } - options.set_use_transactional_md5(false); + check_block_list_equal(committed_blocks, uncommitted_blocks); + std::copy(uncommitted_blocks.begin(), uncommitted_blocks.end(), std::back_inserter(committed_blocks)); + m_blob.upload_block_list(committed_blocks, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + uncommitted_blocks.clear(); + + options.set_use_transactional_md5(true); + options.set_use_transactional_crc64(false); + for (uint16_t i = 0; i < 3; ++i) { - // upload a block of max_block_size - std::vector big_buffer; - big_buffer.resize(azure::storage::protocol::max_block_size); - auto md5 = fill_buffer_and_get_md5(big_buffer); - auto stream = concurrency::streams::bytestream::open_istream(big_buffer); - auto block_id = get_block_id(9); + auto md5 = fill_buffer_and_get_md5(buffer); + auto stream = concurrency::streams::bytestream::open_istream(buffer); + auto block_id = get_block_id(block_id_counter++); uncommitted_blocks.push_back(azure::storage::block_list_item(block_id)); - m_blob.upload_block(block_id, stream, md5, azure::storage::access_condition(), options, m_context); + m_blob.upload_block(block_id, stream, azure::storage::checksum_none, azure::storage::access_condition(), options, m_context); CHECK_UTF8_EQUAL(md5, md5_header); } @@ -150,37 +247,38 @@ SUITE(Blob) m_blob.upload_block_list(committed_blocks, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); uncommitted_blocks.clear(); + 
options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); + for (uint16_t i = 0; i < 3; ++i) { - options.set_use_transactional_md5(true); - fill_buffer_and_get_md5(buffer); + auto crc64 = fill_buffer_and_get_crc64(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); - CHECK_THROW(m_blob.upload_block(get_block_id(0), stream, dummy_md5, azure::storage::access_condition(), options, m_context), azure::storage::storage_exception); - CHECK_UTF8_EQUAL(dummy_md5, md5_header); + auto block_id = get_block_id(block_id_counter++); + uncommitted_blocks.push_back(azure::storage::block_list_item(block_id)); + m_blob.upload_block(block_id, stream, azure::storage::checksum_none, azure::storage::access_condition(), options, m_context); + CHECK_UTF8_EQUAL(crc64, crc64_header); } - options.set_use_transactional_md5(false); + check_block_list_equal(committed_blocks, uncommitted_blocks); + std::copy(uncommitted_blocks.begin(), uncommitted_blocks.end(), std::back_inserter(committed_blocks)); + m_blob.upload_block_list(committed_blocks, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + uncommitted_blocks.clear(); - // trying upload blocks bigger than max_block_size { - buffer.resize(azure::storage::protocol::max_block_size + 1); - fill_buffer_and_get_md5(buffer); - - // seekable stream + options.set_use_transactional_md5(true); + options.set_use_transactional_crc64(false); + fill_buffer(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); - CHECK_THROW(m_blob.upload_block(get_block_id(0), stream, utility::string_t(), azure::storage::access_condition(), options, m_context), std::invalid_argument); + CHECK_THROW(m_blob.upload_block(get_block_id(0), stream, dummy_md5, azure::storage::access_condition(), options, m_context), azure::storage::storage_exception); + CHECK_UTF8_EQUAL(dummy_md5, md5_header); } - { - buffer.resize(azure::storage::protocol::max_block_size * 2); - 
fill_buffer_and_get_md5(buffer); - - concurrency::streams::producer_consumer_buffer pcbuffer; - pcbuffer.putn_nocopy(buffer.data(), azure::storage::protocol::max_block_size * 2); - pcbuffer.close(std::ios_base::out); - - // non-seekable stream - auto stream = pcbuffer.create_istream(); - CHECK_THROW(m_blob.upload_block(get_block_id(0), stream, utility::string_t(), azure::storage::access_condition(), options, m_context), std::invalid_argument); + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); + fill_buffer(buffer); + auto stream = concurrency::streams::bytestream::open_istream(buffer); + CHECK_THROW(m_blob.upload_block(get_block_id(0), stream, dummy_crc64_val, azure::storage::access_condition(), options, m_context), azure::storage::storage_exception); + CHECK_UTF8_EQUAL(dummy_crc64, crc64_header); } check_block_list_equal(committed_blocks, uncommitted_blocks); @@ -192,12 +290,21 @@ SUITE(Blob) { const size_t size = 6 * 1024 * 1024; azure::storage::blob_request_options options; - options.set_store_blob_content_md5(false); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); + check_parallelism(upload_and_download(m_blob, size, 0, 0, true, options, 1, false), 1); + m_blob.delete_blob(); + m_blob.properties().set_content_md5(utility::string_t()); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); check_parallelism(upload_and_download(m_blob, size, 0, 0, true, options, 1, false), 1); m_blob.delete_blob(); m_blob.properties().set_content_md5(utility::string_t()); + options.set_use_transactional_crc64(false); options.set_use_transactional_md5(true); options.set_store_blob_content_md5(true); check_parallelism(upload_and_download(m_blob, size, 0, 0, true, options, 1, true), 1); @@ -291,9 +398,22 @@ SUITE(Blob) { const size_t size = 6 * 1024 * 1024; azure::storage::blob_request_options options; - options.set_use_transactional_md5(true); - 
options.set_store_blob_content_md5(false); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); + check_parallelism(upload_and_download(m_blob, size, 0, 0, false, options, 3, false), 1); + m_blob.delete_blob(); + m_blob.properties().set_content_md5(utility::string_t()); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); + check_parallelism(upload_and_download(m_blob, size, 0, 0, false, options, 3, false), 1); + m_blob.delete_blob(); + m_blob.properties().set_content_md5(utility::string_t()); + + options.set_use_transactional_md5(true); + options.set_use_transactional_crc64(false); check_parallelism(upload_and_download(m_blob, size, 0, 0, false, options, 3, false), 1); m_blob.delete_blob(); m_blob.properties().set_content_md5(utility::string_t()); @@ -339,19 +459,28 @@ SUITE(Blob) { const size_t buffer_size = 6 * 1024 * 1024; const size_t blob_size = 4 * 1024 * 1024; - azure::storage::blob_request_options options; const size_t buffer_offsets[2] = { 0, 1024 }; for (auto buffer_offset : buffer_offsets) { + azure::storage::blob_request_options options; options.set_stream_write_size_in_bytes(blob_size); - options.set_use_transactional_md5(false); options.set_store_blob_content_md5(false); options.set_parallelism_factor(1); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); + check_parallelism(upload_and_download(m_blob, buffer_size, buffer_offset, blob_size, true, options, 1, false), 1); + m_blob.delete_blob(); + m_blob.properties().set_content_md5(utility::string_t()); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); check_parallelism(upload_and_download(m_blob, buffer_size, buffer_offset, blob_size, true, options, 1, false), 1); m_blob.delete_blob(); m_blob.properties().set_content_md5(utility::string_t()); + options.set_use_transactional_crc64(false); options.set_use_transactional_md5(true); 
options.set_store_blob_content_md5(true); check_parallelism(upload_and_download(m_blob, buffer_size, buffer_offset, blob_size, true, options, 1, true), 1); @@ -377,19 +506,28 @@ SUITE(Blob) { const size_t buffer_size = 6 * 1024 * 1024; const size_t blob_size = 4 * 1024 * 1024; - azure::storage::blob_request_options options; const size_t buffer_offsets[2] = { 0, 1024 }; for (auto buffer_offset : buffer_offsets) { + azure::storage::blob_request_options options; options.set_stream_write_size_in_bytes(blob_size); - options.set_use_transactional_md5(false); options.set_store_blob_content_md5(false); options.set_parallelism_factor(1); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); + check_parallelism(upload_and_download(m_blob, buffer_size, buffer_offset, blob_size, false, options, 1, false), 1); + m_blob.delete_blob(); + m_blob.properties().set_content_md5(utility::string_t()); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); check_parallelism(upload_and_download(m_blob, buffer_size, buffer_offset, blob_size, false, options, 1, false), 1); m_blob.delete_blob(); m_blob.properties().set_content_md5(utility::string_t()); + options.set_use_transactional_crc64(false); options.set_use_transactional_md5(true); options.set_store_blob_content_md5(true); check_parallelism(upload_and_download(m_blob, buffer_size, buffer_offset, blob_size, false, options, 1, true), 1); @@ -449,7 +587,7 @@ SUITE(Blob) utility::string_t md5_header; m_context.set_sending_request([&md5_header] (web::http::http_request& request, azure::storage::operation_context) { - if (!request.headers().match(_XPLATSTR("x-ms-blob-content-md5"), md5_header)) + if (!request.headers().match(azure::storage::protocol::ms_header_blob_content_md5, md5_header)) { md5_header.clear(); } @@ -468,7 +606,7 @@ SUITE(Blob) auto original_file = concurrency::streams::file_stream::open_istream(file.path()).get(); 
original_file.read_to_end(original_file_buffer).wait(); original_file.close().wait(); - + concurrency::streams::container_buffer> downloaded_file_buffer; auto downloaded_file = concurrency::streams::file_stream::open_istream(file2.path()).get(); downloaded_file.read_to_end(downloaded_file_buffer).wait(); @@ -521,24 +659,29 @@ SUITE(Blob) CHECK_UTF8_EQUAL(_XPLATSTR("value2"), same_blob.metadata()[_XPLATSTR("key2")]); } - TEST_FIXTURE(block_blob_test_base, block_blob_block_list_use_transactional_md5) + TEST_FIXTURE(block_blob_test_base, block_blob_block_list_use_transactional_checksum) { m_blob.properties().set_content_type(_XPLATSTR("text/plain; charset=utf-8")); utility::string_t md5_header; - m_context.set_sending_request([&md5_header](web::http::http_request& request, azure::storage::operation_context) + utility::string_t crc64_header; + m_context.set_sending_request([&md5_header, &crc64_header](web::http::http_request& request, azure::storage::operation_context) { if (!request.headers().match(web::http::header_names::content_md5, md5_header)) { md5_header.clear(); } + if (!request.headers().match(azure::storage::protocol::ms_header_content_crc64, crc64_header)) + { + crc64_header.clear(); + } }); std::vector blocks; for (uint16_t i = 0; i < 10; i++) { auto id = get_block_id(i); - auto utf8_body = utility::conversions::to_utf8string(utility::conversions::print_string(i)); + auto utf8_body = utility::conversions::to_utf8string(azure::storage::core::convert_to_string(i)); auto stream = concurrency::streams::bytestream::open_istream(std::move(utf8_body)); m_blob.upload_block(id, stream, utility::string_t(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); blocks.push_back(azure::storage::block_list_item(id)); @@ -546,15 +689,25 @@ SUITE(Blob) azure::storage::blob_request_options options; options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); m_blob.upload_block_list(blocks, 
azure::storage::access_condition(), options, m_context); CHECK_UTF8_EQUAL(utility::string_t(), md5_header); CHECK_UTF8_EQUAL(_XPLATSTR("0123456789"), m_blob.download_text(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context)); options.set_use_transactional_md5(true); + options.set_use_transactional_crc64(false); m_blob.upload_block_list(blocks, azure::storage::access_condition(), options, m_context); + CHECK(!md5_header.empty()); CHECK_UTF8_EQUAL(m_context.request_results().back().content_md5(), md5_header); CHECK_UTF8_EQUAL(_XPLATSTR("0123456789"), m_blob.download_text(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context)); + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); + m_blob.upload_block_list(blocks, azure::storage::access_condition(), options, m_context); + CHECK(!crc64_header.empty()); + CHECK_UTF8_EQUAL(m_context.request_results().back().content_crc64(), crc64_header); + CHECK_UTF8_EQUAL(_XPLATSTR("0123456789"), m_blob.download_text(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context)); + m_context.set_sending_request(std::function()); } @@ -623,7 +776,7 @@ SUITE(Blob) for (uint16_t i = 0; i < 10; i++) { auto id = get_block_id(i); - auto utf8_body = utility::conversions::to_utf8string(utility::conversions::print_string(i)); + auto utf8_body = utility::conversions::to_utf8string(azure::storage::core::convert_to_string(i)); auto stream = concurrency::streams::bytestream::open_istream(std::move(utf8_body)); m_blob.upload_block(id, stream, utility::string_t(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); blocks.push_back(azure::storage::block_list_item(id)); @@ -643,7 +796,7 @@ SUITE(Blob) CHECK_UTF8_EQUAL(_XPLATSTR("12356789"), m_blob.download_text(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context)); auto id = get_block_id(4); - auto utf8_body 
= utility::conversions::to_utf8string(utility::conversions::print_string(4)); + auto utf8_body = utility::conversions::to_utf8string(azure::storage::core::convert_to_string(4)); auto stream = concurrency::streams::bytestream::open_istream(std::move(utf8_body)); m_blob.upload_block(id, stream, utility::string_t(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); blocks.insert(blocks.begin(), azure::storage::block_list_item(id)); @@ -654,12 +807,12 @@ SUITE(Blob) m_blob.upload_block_list(blocks, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); CHECK_UTF8_EQUAL(_XPLATSTR("4123567894"), m_blob.download_text(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context)); } - + TEST_FIXTURE(block_blob_test_base, list_uncommitted_blobs) { std::vector buffer; buffer.resize(16 * 1024); - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); auto ucblob = m_container.get_block_blob_reference(_XPLATSTR("ucblob")); ucblob.upload_block(get_block_id(0), stream, utility::string_t(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); @@ -695,4 +848,1078 @@ SUITE(Blob) m_context.set_response_received(std::function()); } + + TEST_FIXTURE(block_blob_test_base, large_block_blob) + { + std::vector buffer; + buffer.resize(12 * 1024 * 1024); + + azure::storage::blob_request_options options; + CHECK_THROW(options.set_single_blob_upload_threshold_in_bytes(5001 * 1024 * 1024ULL), std::invalid_argument); + CHECK_THROW(options.set_stream_write_size_in_bytes(4001 * 1024 * 1024ULL), std::invalid_argument); + + m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer), azure::storage::access_condition(), options, m_context); + CHECK_EQUAL(2U, m_context.request_results().size()); // CreateContainer + PutBlob + + 
options.set_single_blob_upload_threshold_in_bytes(buffer.size() / 2); + m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer), azure::storage::access_condition(), options, m_context); + CHECK_EQUAL(6U, m_context.request_results().size()); // PutBlock * 3 + PutBlockList + + options.set_stream_write_size_in_bytes(6 * 1024 * 1024); + m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer), azure::storage::access_condition(), options, m_context); + CHECK_EQUAL(9U, m_context.request_results().size()); // PutBlock * 2 + PutBlockList + } + + // Validate retry of download_range_to_stream_async. + TEST_FIXTURE(block_blob_test_base, block_blob_retry) + { + std::vector buffer; + buffer.resize(1024); + + azure::storage::blob_request_options options; + // attempt to retry one more time by default + options.set_retry_policy(azure::storage::linear_retry_policy(std::chrono::seconds(1), 1)); + m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer), azure::storage::access_condition(), options, m_context); + + Concurrency::streams::basic_ostream target; + pplx::task task; + + // Validate no retry when stream is closed by Casablanca. + { + std::exception actual; + target = currupted_stream::open_ostream(false, 1); + task = m_blob.download_range_to_stream_async(target, 0, 100, azure::storage::access_condition(), options, azure::storage::operation_context()); + CHECK_STORAGE_EXCEPTION(task.get(), INTENDED_ERR_MSG); + CHECK_EQUAL(1, static_cast*>(target.streambuf().get_base().get())->call_count()); + } + + // Validate exception will be propagated correctly even retry failed. 
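The corrupted-stream cases in this retry test check that a download is retried when the target stream fails, and that it either recovers after N attempts or propagates the last exception. A self-contained sketch of that retry shape (names are invented; the real retry policy additionally applies backoff intervals and classifies which errors are retryable):

```cpp
#include <functional>
#include <stdexcept>

// Invoke op() until it succeeds or max_attempts is exhausted; rethrows the
// last failure. Returns the number of attempts used. Illustrative sketch of
// the retry loop exercised above; no backoff or error classification.
int call_with_retry(const std::function<void()>& op, int max_attempts)
{
    for (int attempt = 1; ; ++attempt)
    {
        try
        {
            op();
            return attempt;
        }
        catch (const std::exception&)
        {
            if (attempt >= max_attempts)
            {
                throw; // propagate the final failure to the caller
            }
        }
    }
}
```

An operation that fails twice and then recovers succeeds on the third attempt with `max_attempts` of 3, mirroring the `recover_on_nretries` knob of the test stream buffer; with a lower attempt budget the last exception escapes.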
+ { + std::exception actual; + target = currupted_stream::open_ostream(true, 2); + task = m_blob.download_range_to_stream_async(target, 0, 100, azure::storage::access_condition(), options, azure::storage::operation_context()); + CHECK_STORAGE_EXCEPTION(task.get(), INTENDED_ERR_MSG); + CHECK_EQUAL(2, static_cast*>(target.streambuf().get_base().get())->call_count()); + } + + // Validate no exception thrown when retry success. + { + target = currupted_stream::open_ostream(true, 1); + task = m_blob.download_range_to_stream_async(target, 0, 100, azure::storage::access_condition(), options, azure::storage::operation_context()); + CHECK_NOTHROW(task.get()); + CHECK_EQUAL(2, static_cast*>(target.streambuf().get_base().get())->call_count()); + } + } + + // Validate set standard blob tier for block blob on standard account. + TEST_FIXTURE(block_blob_test_base, block_blob_standard_tier) + { + // preparation + azure::storage::blob_request_options options; + m_blob.upload_text(_XPLATSTR("test"), azure::storage::access_condition(), options, m_context); + + // test can convert hot->cool or cool->hot. + m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::cool, azure::storage::access_condition(), options, azure::storage::operation_context()); + // validate local has been updated. + CHECK(azure::storage::standard_blob_tier::cool == m_blob.properties().standard_blob_tier()); + // validate server has been updated + m_blob.download_attributes(); + CHECK(azure::storage::standard_blob_tier::cool == m_blob.properties().standard_blob_tier()); + m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::hot, azure::storage::access_condition(), options, azure::storage::operation_context()); + // validate local has been updated. 
+ CHECK(azure::storage::standard_blob_tier::hot == m_blob.properties().standard_blob_tier()); + // validate server has been updated + m_blob.download_attributes(); + CHECK(azure::storage::standard_blob_tier::hot == m_blob.properties().standard_blob_tier()); + + // test standard storage can set archive. + m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::archive, azure::storage::access_condition(), options, azure::storage::operation_context()); + // validate server has been updated + m_blob.download_attributes(); + CHECK(azure::storage::standard_blob_tier::archive == m_blob.properties().standard_blob_tier()); + } + + // Validate set standard blob tier for block blob on blob storage account. + TEST_FIXTURE(premium_block_blob_test_base, block_blob_premium_tier) + { + // preparation + azure::storage::blob_request_options options; + m_blob.upload_text(_XPLATSTR("test"), azure::storage::access_condition(), options, m_context); + + // test can convert hot->cool or cool->hot. + m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::cool, azure::storage::access_condition(), options, azure::storage::operation_context()); + // validate local has been updated. + CHECK(azure::storage::standard_blob_tier::cool == m_blob.properties().standard_blob_tier()); + // validate server has been updated + m_blob.download_attributes(); + CHECK(azure::storage::standard_blob_tier::cool == m_blob.properties().standard_blob_tier()); + m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::hot, azure::storage::access_condition(), options, azure::storage::operation_context()); + // validate local has been updated. + CHECK(azure::storage::standard_blob_tier::hot == m_blob.properties().standard_blob_tier()); + // validate server has been updated + m_blob.download_attributes(); + CHECK(azure::storage::standard_blob_tier::hot == m_blob.properties().standard_blob_tier()); + + // test premium storage can set archive. 
+        m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::archive, azure::storage::access_condition(), options, azure::storage::operation_context());
+        // validate local has been updated.
+        CHECK(azure::storage::standard_blob_tier::archive == m_blob.properties().standard_blob_tier());
+        // validate server has been updated
+        m_blob.download_attributes();
+        CHECK(azure::storage::standard_blob_tier::archive == m_blob.properties().standard_blob_tier());
+
+        // test an archived blob can be set to archive again.
+        m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::archive, azure::storage::access_condition(), options, azure::storage::operation_context());
+        // validate local still reports archive.
+        CHECK(azure::storage::standard_blob_tier::archive == m_blob.properties().standard_blob_tier());
+        // validate server still reports archive
+        m_blob.download_attributes();
+        CHECK(azure::storage::standard_blob_tier::archive == m_blob.properties().standard_blob_tier());
+
+        // test an archived blob can be set back to cool.
+        m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::cool, azure::storage::access_condition(), options, azure::storage::operation_context());
+        // validate local has been updated.
+ CHECK(azure::storage::standard_blob_tier::cool == m_blob.properties().standard_blob_tier()); + // validate server still has archive information + m_blob.download_attributes(); + CHECK(azure::storage::archive_status::rehydrate_pending_to_cool == m_blob.properties().archive_status()); + // validate cannot set back to archive immediately + CHECK_STORAGE_EXCEPTION(m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::archive, azure::storage::access_condition(), options, azure::storage::operation_context()), REHYDRATE_CANNOT_SET_TO_ARCHIVE_ERR_MSG); + } + + TEST_FIXTURE(premium_block_blob_test_base, block_blob_premium_tier_with_lease) + { + // preparation + azure::storage::blob_request_options options; + m_blob.upload_text(_XPLATSTR("test"), azure::storage::access_condition(), options, m_context); + + // acquire a lease + auto lease_id = m_blob.acquire_lease(azure::storage::lease_time(), _XPLATSTR("")); + + // set the acquired lease on the access condition. + azure::storage::access_condition condition; + condition.set_lease_id(lease_id); + + // test can convert hot->cool or cool->hot. + m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::cool, condition, options, azure::storage::operation_context()); + // validate local has been updated. + CHECK(azure::storage::standard_blob_tier::cool == m_blob.properties().standard_blob_tier()); + // validate server has been updated + m_blob.download_attributes(); + CHECK(azure::storage::standard_blob_tier::cool == m_blob.properties().standard_blob_tier()); + m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::hot, condition, options, azure::storage::operation_context()); + // validate local has been updated. 
+ CHECK(azure::storage::standard_blob_tier::hot == m_blob.properties().standard_blob_tier()); + // validate server has been updated + m_blob.download_attributes(); + CHECK(azure::storage::standard_blob_tier::hot == m_blob.properties().standard_blob_tier()); + + // test premium storage can set archive. + m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::archive, condition, options, azure::storage::operation_context()); + // validate local has been updated. + CHECK(azure::storage::standard_blob_tier::archive == m_blob.properties().standard_blob_tier()); + // validate server has been updated + m_blob.download_attributes(); + CHECK(azure::storage::standard_blob_tier::archive == m_blob.properties().standard_blob_tier()); + + // test an archived blob can be set to archive again. + m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::archive, condition, options, azure::storage::operation_context()); + // validate local has been changed. + CHECK(azure::storage::standard_blob_tier::archive == m_blob.properties().standard_blob_tier()); + // validate server has been changed + m_blob.download_attributes(); + CHECK(azure::storage::standard_blob_tier::archive == m_blob.properties().standard_blob_tier()); + + // test an archived blob can be set back to cool. + m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::cool, condition, options, azure::storage::operation_context()); + // validate local has been updated. 
+ CHECK(azure::storage::standard_blob_tier::cool == m_blob.properties().standard_blob_tier()); + // validate server still has archive information + m_blob.download_attributes(); + CHECK(azure::storage::archive_status::rehydrate_pending_to_cool == m_blob.properties().archive_status()); + // validate cannot set back to archive immediately + CHECK_STORAGE_EXCEPTION(m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::archive, condition, options, azure::storage::operation_context()), REHYDRATE_CANNOT_SET_TO_ARCHIVE_ERR_MSG); + CHECK(azure::storage::archive_status::rehydrate_pending_to_cool == m_blob.properties().archive_status()); + + // validate that omitting the lease id reports failure. + CHECK_STORAGE_EXCEPTION(m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::archive, azure::storage::access_condition(), options, azure::storage::operation_context()), ACTIVE_LEASE_ERROR_MESSAGE); + CHECK(azure::storage::archive_status::rehydrate_pending_to_cool == m_blob.properties().archive_status()); + } + + TEST_FIXTURE(block_blob_test_base, block_blob_create_delete_cancellation) + { + + { + // cancel the cancellation prior to the operation + auto cancel_token_src = pplx::cancellation_token_source(); + cancel_token_src.cancel(); + std::string ex_msg; + + try + { + auto task_result = m_blob.upload_text_async(_XPLATSTR("test"), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + // cancel the cancellation during the operation + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.upload_text_async(_XPLATSTR("test"), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + 
std::this_thread::sleep_for(std::chrono::milliseconds(3)); // sleep for some time before canceling the request and observing the result. + cancel_token_src.cancel(); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.upload_text_async(_XPLATSTR("test"), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + task_result.get(); + // cancel the cancellation after the operation + cancel_token_src.cancel(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + + { + // cancel the cancellation prior to the operation + auto cancel_token_src = pplx::cancellation_token_source(); + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + // cancel the cancellation during the operation + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(3)); // sleep for some time before canceling the request and observing the result. 
+ cancel_token_src.cancel(); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + task_result.get(); + // cancel the cancellation after the operation + cancel_token_src.cancel(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + } + + TEST_FIXTURE(block_blob_test_base, block_blob_create_delete_timeout) + { + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.upload_text_async(_XPLATSTR("test"), azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(3)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.upload_text_async(_XPLATSTR("test"), azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(10000)); + + std::string ex_msg; + + try + { + auto task_result = 
m_blob.upload_text_async(_XPLATSTR("test"), azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(3)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(10000)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + } + + TEST_FIXTURE(block_blob_test_base, block_blob_create_delete_cancellation_timeout) + { + { + //when cancellation first + auto options = azure::storage::blob_request_options(); + 
options.set_maximum_execution_time(std::chrono::milliseconds(100)); + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token()); + cancel_token_src.cancel(); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + //when timeout first + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(3)); + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(30)); // sleep for some time before canceling the request and observing the result. 
+ cancel_token_src.cancel(); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + } + + TEST_FIXTURE(block_blob_test_base, block_blob_open_read_write_cancellation) + { + std::vector<uint8_t> buffer; + buffer.resize(4 * 1024 * 1024); + fill_buffer(buffer); + + { + // cancel the cancellation prior to the operation + auto cancel_token_src = pplx::cancellation_token_source(); + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + // cancel the cancellation prior to the operation and write to a canceled ostream. 
+ auto cancel_token_src = pplx::cancellation_token_source(); + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + // cancel the cancellation during the operation + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + std::this_thread::sleep_for(std::chrono::milliseconds(10)); // sleep for some time before canceling the request and observing the result. 
+ cancel_token_src.cancel(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + // cancel the cancellation after the operation + cancel_token_src.cancel(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + + m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer)); + + { + auto cancel_token_src = pplx::cancellation_token_source(); + // cancel the cancellation prior to the operation + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_read_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + auto is = task_result.get(); + is.read().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + // cancel the cancellation during the operation + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_read_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(10)); // sleep for some time before canceling the request and observing the result. 
+ cancel_token_src.cancel(); + auto is = task_result.get(); + is.read().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_read_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(10)); // sleep for some time before canceling the request and observing the result. + auto is = task_result.get(); + is.read().get(); + // cancel the cancellation after the operation + cancel_token_src.cancel(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + + } + + TEST_FIXTURE(block_blob_test_base, block_blob_open_read_write_timeout) + { + std::vector<uint8_t> buffer; + buffer.resize(4 * 1024 * 1024); + fill_buffer(buffer); + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(azure::storage::access_condition(), options, m_context); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(20)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(azure::storage::access_condition(), options, m_context); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = 
std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::seconds(20)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(azure::storage::access_condition(), options, m_context); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + + m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer)); + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(1)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_read_async(azure::storage::access_condition(), options, m_context); + auto is = task_result.get(); + is.read().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(20)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_read_async(azure::storage::access_condition(), options, m_context); + auto is = task_result.get(); + is.read().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::seconds(30)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_read_async(azure::storage::access_condition(), options, m_context); + auto is = task_result.get(); + 
is.read().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("", ex_msg); + } + } + + TEST_FIXTURE(block_blob_test_base, block_blob_open_read_write_cancellation_timeout) + { + std::vector<uint8_t> buffer; + buffer.resize(4 * 1024 * 1024); + fill_buffer(buffer); + + { + //when cancellation first + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(100)); + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(azure::storage::access_condition(), options, m_context, cancel_token_src.get_token()); + auto os = task_result.get(); + cancel_token_src.cancel(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + //when timeout first + auto options = azure::storage::blob_request_options(); + options.set_maximum_execution_time(std::chrono::milliseconds(10)); + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.open_write_async(azure::storage::access_condition(), options, m_context, cancel_token_src.get_token()); + auto os = task_result.get(); + os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait(); + std::this_thread::sleep_for(std::chrono::milliseconds(30)); // sleep for some time before canceling the request and observing the result. 
+ cancel_token_src.cancel(); + os.close().get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + } + + TEST_FIXTURE(block_blob_test_base, block_blob_concurrent_upload_cancellation_timeout) + { + utility::size64_t length = 260 * 1024 * 1024; + std::vector<uint8_t> buffer; + buffer.resize(length); + fill_buffer(buffer); + + { + auto cancel_token_src = pplx::cancellation_token_source(); + auto options = azure::storage::blob_request_options(); + options.set_parallelism_factor(4); + options.set_maximum_execution_time(std::chrono::milliseconds(1000)); + // cancel the cancellation prior to the operation + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.upload_from_stream_async(concurrency::streams::bytestream::open_istream(buffer), length, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token()); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + auto options = azure::storage::blob_request_options(); + options.set_parallelism_factor(4); + options.set_maximum_execution_time(std::chrono::milliseconds(1000)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.upload_from_stream_async(concurrency::streams::bytestream::open_istream(buffer), length, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(300)); // sleep for some time before canceling the request and observing the result. 
+ cancel_token_src.cancel(); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_parallelism_factor(4); + options.set_maximum_execution_time(std::chrono::milliseconds(1000)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.upload_from_stream_async(concurrency::streams::bytestream::open_istream(buffer), length, azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + } + + TEST_FIXTURE(block_blob_test_base, block_blob_single_upload_cancellation_timeout) + { + utility::size64_t length = 128 * 1024 * 1024; + std::vector<uint8_t> buffer; + buffer.resize(length); + fill_buffer(buffer); + + { + auto cancel_token_src = pplx::cancellation_token_source(); + auto options = azure::storage::blob_request_options(); + options.set_single_blob_upload_threshold_in_bytes(length * 2); + options.set_maximum_execution_time(std::chrono::milliseconds(500)); + // cancel the cancellation prior to the operation + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.upload_from_stream_async(concurrency::streams::bytestream::open_istream(buffer), length, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token()); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + auto options = azure::storage::blob_request_options(); + options.set_single_blob_upload_threshold_in_bytes(length * 2); + 
options.set_maximum_execution_time(std::chrono::milliseconds(500)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.upload_from_stream_async(concurrency::streams::bytestream::open_istream(buffer), length, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(300)); // sleep for some time before canceling the request and observing the result. + cancel_token_src.cancel(); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto options = azure::storage::blob_request_options(); + options.set_single_blob_upload_threshold_in_bytes(length * 2); + options.set_maximum_execution_time(std::chrono::milliseconds(500)); + + std::string ex_msg; + + try + { + auto task_result = m_blob.upload_from_stream_async(concurrency::streams::bytestream::open_istream(buffer), length, azure::storage::access_condition(), options, m_context); + task_result.get(); + } + catch (azure::storage::storage_exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg); + } + } + + TEST_FIXTURE(block_blob_test_base, block_blob_cpkv) + { + utility::size64_t length = 128 * 1024; + std::vector<uint8_t> buffer(length); + fill_buffer(buffer); + auto empty_options = azure::storage::blob_request_options(); + auto cpk_options = azure::storage::blob_request_options(); + std::vector<uint8_t> key(32); + fill_buffer(key); + cpk_options.set_encryption_key(key); + + m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer), length, azure::storage::access_condition(), cpk_options, m_context); + + for (const auto& options : { empty_options, cpk_options }) + { + concurrency::streams::container_buffer<std::vector<uint8_t>> download_buffer; + concurrency::streams::ostream download_stream(download_buffer); + if 
(options.encryption_key().empty()) + { + CHECK_THROW(m_blob.download_to_stream(download_stream, azure::storage::access_condition(), options, m_context), azure::storage::storage_exception); + } + else + { + m_blob.download_to_stream(download_stream, azure::storage::access_condition(), options, m_context); + CHECK(!m_blob.properties().encryption_key_sha256().empty()); + CHECK(buffer == download_buffer.collection()); + } + } + + auto binary_data = get_random_binary_data(); + binary_data.resize(10); + utility::string_t block_id = utility::conversions::to_base64(binary_data); + m_blob.upload_block(block_id, concurrency::streams::bytestream::open_istream(buffer), azure::storage::checksum_none, azure::storage::access_condition(), cpk_options, m_context); + std::vector<azure::storage::block_list_item> blocks_id; + blocks_id.emplace_back(block_id); + m_blob.upload_block_list(blocks_id, azure::storage::access_condition(), cpk_options, m_context); + + for (const auto& options : { empty_options, cpk_options }) + { + concurrency::streams::container_buffer<std::vector<uint8_t>> download_buffer; + concurrency::streams::ostream download_stream(download_buffer); + if (options.encryption_key().empty()) + { + CHECK_THROW(m_blob.download_to_stream(download_stream, azure::storage::access_condition(), options, m_context), azure::storage::storage_exception); + } + else + { + m_blob.download_to_stream(download_stream, azure::storage::access_condition(), options, m_context); + CHECK(!m_blob.properties().encryption_key_sha256().empty()); + CHECK(buffer == download_buffer.collection()); + } + } + + m_blob.properties().set_content_type(_XPLATSTR("application/octet-stream")); + m_blob.properties().set_content_language(_XPLATSTR("en-US")); + CHECK_THROW(m_blob.upload_metadata(azure::storage::access_condition(), empty_options, m_context), azure::storage::storage_exception); + CHECK_THROW(m_blob.download_attributes(azure::storage::access_condition(), empty_options, m_context), azure::storage::storage_exception); + 
m_blob.upload_properties(azure::storage::access_condition(), empty_options, m_context); + m_blob.upload_metadata(azure::storage::access_condition(), cpk_options, m_context); + m_blob.download_attributes(azure::storage::access_condition(), cpk_options, m_context); + + CHECK_THROW(m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::cool, azure::storage::access_condition(), empty_options, m_context), azure::storage::storage_exception); + CHECK_THROW(m_blob.set_standard_blob_tier(azure::storage::standard_blob_tier::cool, azure::storage::access_condition(), cpk_options, m_context), azure::storage::storage_exception); + + CHECK_THROW(m_blob.create_snapshot(azure::storage::cloud_metadata(), azure::storage::access_condition(), empty_options, m_context), azure::storage::storage_exception); + auto snapshot_blob = m_blob.create_snapshot(azure::storage::cloud_metadata(), azure::storage::access_condition(), cpk_options, m_context); + CHECK(snapshot_blob.is_snapshot()); + for (const auto& options : { empty_options, cpk_options }) + { + concurrency::streams::container_buffer<std::vector<uint8_t>> download_buffer; + concurrency::streams::ostream download_stream(download_buffer); + if (options.encryption_key().empty()) + { + CHECK_THROW(snapshot_blob.download_to_stream(download_stream, azure::storage::access_condition(), options, m_context), azure::storage::storage_exception); + } + else + { + snapshot_blob.download_to_stream(download_stream, azure::storage::access_condition(), options, m_context); + CHECK(!snapshot_blob.properties().encryption_key_sha256().empty()); + CHECK(buffer == download_buffer.collection()); + } + } + } } diff --git a/Microsoft.WindowsAzure.Storage/tests/cloud_file_directory_test.cpp b/Microsoft.WindowsAzure.Storage/tests/cloud_file_directory_test.cpp index d9436b2e..c77d9d8f 100644 --- a/Microsoft.WindowsAzure.Storage/tests/cloud_file_directory_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/cloud_file_directory_test.cpp @@ -19,6 +19,8 @@ #include 
"file_test_base.h" #include "check_macros.h" +#include "wascore/util.h" + #pragma region Fixture #pragma endregion @@ -103,9 +105,11 @@ SUITE(File) CHECK(m_directory.metadata().empty()); CHECK(m_directory.properties().etag().empty()); CHECK(!m_directory.properties().last_modified().is_initialized()); + CHECK(!m_directory.properties().server_encrypted()); m_directory.create_if_not_exists(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); m_directory.download_attributes(); + CHECK(m_directory.properties().server_encrypted()); CHECK(m_directory.get_parent_share_reference().is_valid()); check_equal(m_share, m_directory.get_parent_share_reference()); @@ -233,6 +237,7 @@ SUITE(File) CHECK(file.metadata().empty()); CHECK(file.properties().etag().empty()); CHECK(!file.properties().last_modified().is_initialized()); + CHECK_EQUAL(512U, file.properties().length()); for (auto file_name = files_three.begin(); file_name != files_three.end(); file_name++) { @@ -254,6 +259,7 @@ SUITE(File) CHECK(file.metadata().empty()); CHECK(file.properties().etag().empty()); CHECK(!file.properties().last_modified().is_initialized()); + CHECK_EQUAL(512U, file.properties().length()); for (auto file_name = files_two.begin(); file_name != files_two.end(); file_name++) { @@ -278,6 +284,7 @@ SUITE(File) CHECK(file.metadata().empty()); CHECK(file.properties().etag().empty()); CHECK(!file.properties().last_modified().is_initialized()); + CHECK_EQUAL(512U, file.properties().length()); for (auto file_name = files_one.begin(); file_name != files_one.end(); file_name++) { @@ -294,6 +301,70 @@ SUITE(File) CHECK(files_one.empty()); } + TEST_FIXTURE(file_directory_test_base, directory_list_files_and_directories_with_prefix) + { + m_directory.create_if_not_exists(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + + auto prefix = _XPLATSTR("t") + get_random_string(3); + auto dir_prefix = prefix + _XPLATSTR("dir"); + auto 
file_prefix = prefix + _XPLATSTR("file"); + auto exclude_prefix = _XPLATSTR("exclude"); + + std::vector<azure::storage::cloud_file_directory> directories; + std::vector<azure::storage::cloud_file> files; + for (int i = 0; i < get_random_int32() % 3 + 1; ++i) + { + auto subdirectory = m_directory.get_subdirectory_reference(dir_prefix + azure::storage::core::convert_to_string(i)); + subdirectory.create(); + directories.push_back(subdirectory); + + auto file = m_directory.get_file_reference(file_prefix + azure::storage::core::convert_to_string(i)); + file.create(1); + files.push_back(file); + + m_directory.get_subdirectory_reference(exclude_prefix + azure::storage::core::convert_to_string(i)).create(); + } + + size_t num_items_expected = directories.size() + files.size(); + size_t num_items_actual = 0; + for (auto&& item : m_directory.list_files_and_directories(prefix)) + { + ++num_items_actual; + if (item.is_directory()) + { + auto actual = item.as_directory(); + CHECK(actual.get_parent_share_reference().is_valid()); + check_equal(m_share, actual.get_parent_share_reference()); + + auto it_found = std::find_if(directories.begin(), directories.end(), [&actual](const azure::storage::cloud_file_directory& expect) + { + return actual.name() == expect.name(); + }); + CHECK(it_found != directories.end()); + check_equal(*it_found, actual); + directories.erase(it_found); + } + else if (item.is_file()) + { + auto actual = item.as_file(); + CHECK(actual.get_parent_share_reference().is_valid()); + check_equal(m_share, actual.get_parent_share_reference()); + + auto it_found = std::find_if(files.begin(), files.end(), [&actual](const azure::storage::cloud_file& expect) + { + return actual.name() == expect.name(); + }); + CHECK(it_found != files.end()); + check_equal(*it_found, actual); + files.erase(it_found); + } + } + + CHECK_EQUAL(num_items_expected, num_items_actual); + CHECK(directories.empty()); + CHECK(files.empty()); + } + TEST_FIXTURE(file_directory_test_base, directory_get_directory_ref) { 
m_directory.create_if_not_exists(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); diff --git a/Microsoft.WindowsAzure.Storage/tests/cloud_file_share_test.cpp b/Microsoft.WindowsAzure.Storage/tests/cloud_file_share_test.cpp index e4b8170d..87862886 100644 --- a/Microsoft.WindowsAzure.Storage/tests/cloud_file_share_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/cloud_file_share_test.cpp @@ -19,6 +19,8 @@ #include "file_test_base.h" #include "check_macros.h" +#include "wascore/util.h" + #pragma region Fixture #pragma endregion @@ -43,7 +45,7 @@ SUITE(File) TEST_FIXTURE(file_share_test_base, share_create_delete_with_quotas) { - size_t quota = rand() % 5120 + 1; + size_t quota = get_random_int32() % 5120 + 1; CHECK(!m_share.exists(azure::storage::file_request_options(), m_context)); CHECK(!m_share.delete_share_if_exists(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context)); @@ -195,9 +197,11 @@ SUITE(File) TEST_FIXTURE(file_share_test_base, share_stats) { - m_share.create_if_not_exists(azure::storage::file_request_options(), m_context); + m_share.create(azure::storage::file_request_options(), m_context); auto quota = m_share.download_share_usage(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); CHECK_EQUAL(0, quota); + auto quota_in_bytes = m_share.download_share_usage_in_bytes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + CHECK_EQUAL(0, quota_in_bytes); } // share level sas test @@ -237,7 +241,7 @@ SUITE(File) policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_minutes(30)); auto sas_token = m_share.get_shared_access_signature(policy); - auto file = m_share.get_root_directory_reference().get_file_reference(_XPLATSTR("file") + utility::conversions::print_string((int)i)); + auto file = m_share.get_root_directory_reference().get_file_reference(_XPLATSTR("file") + 
azure::storage::core::convert_to_string((int)i)); file.create_if_not_exists(512U, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); file.properties().set_cache_control(_XPLATSTR("no-transform")); file.properties().set_content_disposition(_XPLATSTR("attachment")); @@ -249,4 +253,24 @@ SUITE(File) check_access(sas_token, permissions, azure::storage::cloud_file_shared_access_headers(), file); } } -} \ No newline at end of file + + TEST_FIXTURE(file_share_test_base, file_permission) + { + utility::size64_t quota = 512; + m_share.create_if_not_exists(quota, azure::storage::file_request_options(), m_context); + auto file = m_share.get_root_directory_reference().get_file_reference(_XPLATSTR("test")); + utility::string_t content = _XPLATSTR("testtargetfile"); + file.create_if_not_exists(content.length()); + file.upload_text(content, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + file.download_attributes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + + utility::string_t permission_key = file.properties().permission_key(); + CHECK(!permission_key.empty()); + + utility::string_t permission = m_share.download_file_permission(permission_key); + CHECK(!permission.empty()); + + utility::string_t permission_key2 = m_share.upload_file_permission(permission); + CHECK(!permission_key2.empty()); + } +} diff --git a/Microsoft.WindowsAzure.Storage/tests/cloud_file_test.cpp b/Microsoft.WindowsAzure.Storage/tests/cloud_file_test.cpp index 375c2b03..5685dc3c 100644 --- a/Microsoft.WindowsAzure.Storage/tests/cloud_file_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/cloud_file_test.cpp @@ -20,6 +20,8 @@ #include "check_macros.h" #include "blob_test_base.h" +#include "wascore/util.h" + #pragma region Fixture bool file_test_base::wait_for_copy(azure::storage::cloud_file& file) @@ -45,6 +47,9 @@ SUITE(File) CHECK(m_file.create_if_not_exists(1024U, 
azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context)); CHECK(!m_file.create_if_not_exists(1024U, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context)); CHECK_EQUAL(m_file.properties().length(), 1024U); + m_file.download_attributes(); + + CHECK_EQUAL(m_file.properties().server_encrypted(), true); CHECK(m_file.exists(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context)); @@ -108,6 +113,167 @@ SUITE(File) CHECK_EQUAL(0U, same_file.metadata().size()); } + TEST_FIXTURE(file_test_base, file_properties) + { + m_file.create_if_not_exists(1024U, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + m_file.download_attributes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + CHECK_EQUAL(1024U, m_file.properties().length()); + + { + m_file.properties().set_content_md5(_XPLATSTR("")); + m_file.upload_properties(); + CHECK(_XPLATSTR("") == m_file.properties().content_md5()); + + auto same_file = m_file.get_parent_share_reference().get_directory_reference(m_directory.name()).get_file_reference(m_file.name()); + auto stream = concurrency::streams::container_stream<std::vector<uint8_t>>::open_ostream(); + azure::storage::file_request_options options; + options.set_use_transactional_md5(true); + same_file.download_range_to_stream(stream, 0, 128, azure::storage::file_access_condition(), options, m_context); + CHECK(_XPLATSTR("") == same_file.properties().content_md5()); + } + } + + TEST_FIXTURE(file_test_base, file_smb_properties) + { + m_file.properties().set_permission(azure::storage::cloud_file_properties::inherit); + m_file.properties().set_attributes(azure::storage::cloud_file_attributes::none); + m_file.properties().set_creation_time(azure::storage::cloud_file_properties::now); + m_file.properties().set_last_write_time(azure::storage::cloud_file_properties::now); + + m_file.create(1024); + auto 
properties = m_file.properties(); + CHECK(properties.permission().empty()); + CHECK(!properties.permission_key().empty()); + CHECK(properties.attributes() != azure::storage::cloud_file_attributes::none); + CHECK(properties.creation_time().is_initialized()); + CHECK(properties.last_write_time().is_initialized()); + CHECK(properties.change_time().is_initialized()); + CHECK(!properties.file_id().empty()); + CHECK(!properties.file_parent_id().empty()); + + m_file.properties().set_permission(azure::storage::cloud_file_properties::preserve); + m_file.properties().set_attributes(azure::storage::cloud_file_attributes::preserve); + m_file.properties().set_creation_time(azure::storage::cloud_file_properties::preserve); + m_file.properties().set_last_write_time(azure::storage::cloud_file_properties::preserve); + + m_file.upload_properties(); + CHECK(m_file.properties().permission().empty()); + CHECK(m_file.properties().permission_key() == properties.permission_key()); + CHECK(m_file.properties().attributes() == properties.attributes()); + CHECK(m_file.properties().creation_time() == properties.creation_time()); + CHECK(m_file.properties().last_write_time() == properties.last_write_time()); + CHECK(m_file.properties().file_id() == properties.file_id()); + CHECK(m_file.properties().file_parent_id() == properties.file_parent_id()); + + utility::string_t permission = m_share.download_file_permission(m_file.properties().permission_key()); + m_file.properties().set_permission(permission); + m_file.properties().set_attributes( + azure::storage::cloud_file_attributes::readonly | azure::storage::cloud_file_attributes::hidden | azure::storage::cloud_file_attributes::system | + azure::storage::cloud_file_attributes::archive | azure::storage::cloud_file_attributes::temporary | azure::storage::cloud_file_attributes::offline | + azure::storage::cloud_file_attributes::not_content_indexed | azure::storage::cloud_file_attributes::no_scrub_data); + auto new_attributes = 
m_file.properties().attributes(); + auto current_time = utility::datetime::utc_now(); + m_file.properties().set_creation_time(current_time); + m_file.properties().set_last_write_time(current_time); + + m_file.upload_properties(); + CHECK(m_file.properties().permission().empty()); + CHECK(!m_file.properties().permission_key().empty()); + CHECK(m_file.properties().attributes() == new_attributes); + CHECK(m_file.properties().creation_time() == current_time); + CHECK(m_file.properties().last_write_time() == current_time); + + m_file.upload_properties(); + } + + TEST_FIXTURE(file_test_base, directory_smb_properties) + { + azure::storage::cloud_file_directory directory = m_share.get_directory_reference(get_random_string()); + directory.properties().set_permission(azure::storage::cloud_file_directory_properties::inherit); + directory.properties().set_attributes(azure::storage::cloud_file_attributes::none); + directory.properties().set_creation_time(azure::storage::cloud_file_directory_properties::now); + directory.properties().set_last_write_time(azure::storage::cloud_file_directory_properties::now); + directory.create(); + + auto properties = directory.properties(); + CHECK(properties.permission().empty()); + CHECK(!properties.permission_key().empty()); + CHECK(properties.attributes() != azure::storage::cloud_file_attributes::none); + CHECK(properties.creation_time().is_initialized()); + CHECK(properties.last_write_time().is_initialized()); + CHECK(!properties.file_id().empty()); + + directory.properties().set_permission(azure::storage::cloud_file_directory_properties::preserve); + directory.properties().set_attributes(azure::storage::cloud_file_attributes::preserve); + directory.properties().set_creation_time(azure::storage::cloud_file_directory_properties::preserve); + directory.properties().set_last_write_time(azure::storage::cloud_file_directory_properties::preserve); + + directory.upload_properties(); + CHECK(directory.properties().permission().empty()); + 
CHECK(directory.properties().permission_key() == properties.permission_key()); + CHECK(directory.properties().attributes() == properties.attributes()); + CHECK(directory.properties().creation_time() == properties.creation_time()); + CHECK(directory.properties().last_write_time() == properties.last_write_time()); + CHECK(directory.properties().file_id() == properties.file_id()); + CHECK(directory.properties().file_parent_id() == properties.file_parent_id()); + + utility::string_t permission = m_share.download_file_permission(directory.properties().permission_key()); + directory.properties().set_permission(permission); + directory.properties().set_attributes( + azure::storage::cloud_file_attributes::readonly | azure::storage::cloud_file_attributes::hidden | azure::storage::cloud_file_attributes::system | + azure::storage::cloud_file_attributes::directory | azure::storage::cloud_file_attributes::offline | azure::storage::cloud_file_attributes::not_content_indexed | + azure::storage::cloud_file_attributes::no_scrub_data); + auto new_attributes = directory.properties().attributes(); + auto current_time = utility::datetime::utc_now(); + directory.properties().set_creation_time(current_time); + directory.properties().set_last_write_time(current_time); + + directory.upload_properties(); + CHECK(directory.properties().permission().empty()); + CHECK(!directory.properties().permission_key().empty()); + CHECK(directory.properties().attributes() == new_attributes); + CHECK(directory.properties().creation_time() == current_time); + CHECK(directory.properties().last_write_time() == current_time); + + directory.upload_properties(); + } + + TEST_FIXTURE(file_test_base, file_properties_resize_wont_work) + { + m_file.create_if_not_exists(1024U, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + m_file.download_attributes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + CHECK_EQUAL(1024U, 
m_file.properties().length()); + + //Uploading newly constructed properties does not set the size of the file back to zero. + m_file.properties() = azure::storage::cloud_file_properties(); + m_file.upload_properties(); + m_file.download_attributes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + CHECK_EQUAL(1024U, m_file.properties().length()); + + //Uploading properties whose length is explicitly zero will not set the size of the file back to zero. + m_file.resize(0U, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + m_file.download_attributes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + CHECK_EQUAL(0U, m_file.properties().length()); + auto zero_properties = m_file.properties(); + m_file.resize(1024U, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + m_file.download_attributes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + CHECK_EQUAL(1024U, m_file.properties().length()); + m_file.properties() = zero_properties; + m_file.upload_properties(); + m_file.download_attributes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + CHECK_EQUAL(1024U, m_file.properties().length()); + + //Uploading properties whose length is explicitly non-zero will not set the size of the file to that non-zero value. 
+ auto non_zero_properties = m_file.properties(); + m_file.resize(0U, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + m_file.download_attributes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + CHECK_EQUAL(0U, m_file.properties().length()); + m_file.properties() = non_zero_properties; + m_file.upload_properties(); + m_file.download_attributes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + CHECK_EQUAL(0U, m_file.properties().length()); + } + TEST_FIXTURE(file_test_base, file_resize) { m_file.create_if_not_exists(1024U, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); @@ -122,6 +288,10 @@ SUITE(File) m_file.resize(length, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); m_file.download_attributes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); CHECK_EQUAL(length, m_file.properties().length()); + //Set the length back to zero and verify that it works. 
+ m_file.resize(0U, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + m_file.download_attributes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + CHECK_EQUAL(0U, m_file.properties().length()); } TEST_FIXTURE(file_test_base, file_get_parent_directory_ref) @@ -220,7 +390,7 @@ SUITE(File) for (size_t i = 0; i < 2; ++i) { auto blob_name = this->get_random_string(); - auto container = test_config::instance().account().create_cloud_blob_client().get_container_reference(_XPLATSTR("container")); + auto container = test_config::instance().account().create_cloud_blob_client().get_container_reference(_XPLATSTR("container") + get_random_string()); container.create_if_not_exists(); auto source = container.get_block_blob_reference(blob_name); @@ -235,12 +405,14 @@ SUITE(File) /// create dest files with specified sas credentials, only read access to dest read file and only write access to dest write file. auto dest_file_name = this->get_random_string(); auto dest = m_directory.get_file_reference(dest_file_name); - + /// try to copy from source blob to dest file, use dest_read_file to check copy stats. 
auto copy_id = dest.start_copy(source_blob, azure::storage::access_condition(), azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); CHECK(wait_for_copy(dest)); CHECK_THROW(dest.abort_copy(copy_id, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context), azure::storage::storage_exception); CHECK_EQUAL(web::http::status_codes::Conflict, m_context.request_results().back().http_status_code()); + + container.delete_container(); } } @@ -287,9 +459,11 @@ SUITE(File) { utility::string_t content = _XPLATSTR("content"); m_file.create(content.length(), azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); + CHECK(!m_file.properties().server_encrypted()); m_file.upload_text(content, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); auto download_content = m_file.download_text(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); CHECK(content == download_content); + CHECK(m_file.properties().server_encrypted()); } TEST_FIXTURE(file_test_base, file_upload_download_from_file) @@ -363,7 +537,7 @@ SUITE(File) { auto range = ranges1.at(0); CHECK(range.start_offset() == 0); - CHECK((range.end_offset() - range.start_offset() + 1) == content.length()); + CHECK(size_t(range.end_offset() - range.start_offset() + 1) == content.length()); } m_file.clear_range(0, content.length()); auto ranges_clear = m_file.list_ranges(0, 2048, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); @@ -373,7 +547,7 @@ SUITE(File) { auto range = ranges1.at(0); CHECK(range.start_offset() == 0); - CHECK((range.end_offset() - range.start_offset() + 1) == content.length()); + CHECK(size_t(range.end_offset() - range.start_offset() + 1) == content.length()); } // verify write range with total length larger than the content. 
@@ -435,7 +609,7 @@ SUITE(File) policy.set_start(utility::datetime::utc_now() - utility::datetime::from_minutes(5)); policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_minutes(30)); - auto file = m_share.get_root_directory_reference().get_file_reference(_XPLATSTR("file") + utility::conversions::print_string((int)i)); + auto file = m_share.get_root_directory_reference().get_file_reference(_XPLATSTR("file") + azure::storage::core::convert_to_string((int)i)); file.create_if_not_exists(512U, azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); file.properties().set_cache_control(_XPLATSTR("no-transform")); file.properties().set_content_disposition(_XPLATSTR("attachment")); @@ -472,6 +646,14 @@ SUITE(File) CHECK_UTF8_EQUAL(content[i], download_content); file.download_attributes(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context); CHECK(!file.properties().content_md5().empty()); + + auto same_file = m_directory.get_file_reference(filename[i]); + concurrency::streams::container_buffer<std::vector<uint8_t>> buff; + same_file.download_range_to_stream(buff.create_ostream(), 0, content[i].length() / 2); + std::vector<uint8_t>& data = buff.collection(); + std::string download_partial_content(data.begin(), data.end()); + CHECK_UTF8_EQUAL(content[i].substr(0, content[i].length() / 2), download_partial_content); + CHECK_UTF8_EQUAL(file.properties().content_md5(), same_file.properties().content_md5()); + } } @@ -529,6 +711,102 @@ SUITE(File) } } + /// <summary> + /// Test parallel download with offset + /// </summary> + TEST_FIXTURE(file_test_base, parallel_download_with_offset) + { + // file with size larger than 32MB. + // With offset not zero. 
+ { + auto file_name = get_random_string(20); + auto file = m_directory.get_file_reference(file_name); + size_t target_length = 100 * 1024 * 1024; + azure::storage::file_request_options option; + option.set_parallelism_factor(2); + std::vector<uint8_t> data; + data.resize(target_length); + fill_buffer(data); + concurrency::streams::container_buffer<std::vector<uint8_t>> upload_buffer(data); + file.upload_from_stream(upload_buffer.create_istream(), azure::storage::file_access_condition(), option, m_context); + + // download target file in parallel. + azure::storage::operation_context context; + concurrency::streams::container_buffer<std::vector<uint8_t>> download_buffer; + + utility::size64_t actual_offset = get_random_int32() % 255 + 1; + utility::size64_t actual_length = target_length - actual_offset; + file.download_range_to_stream(download_buffer.create_ostream(), actual_offset, actual_length, azure::storage::file_access_condition(), option, context); + + check_parallelism(context, 2); + CHECK(file.properties().size() == target_length); + CHECK(download_buffer.collection().size() == actual_length); + CHECK(std::equal(data.begin() + actual_offset, data.end(), download_buffer.collection().begin())); + } + + // file with size larger than 32MB. + // With offset not zero, length = max. + { + auto file_name = get_random_string(20); + auto file = m_directory.get_file_reference(file_name); + size_t target_length = 100 * 1024 * 1024; + azure::storage::file_request_options option; + option.set_parallelism_factor(2); + std::vector<uint8_t> data; + data.resize(target_length); + fill_buffer(data); + concurrency::streams::container_buffer<std::vector<uint8_t>> upload_buffer(data); + file.upload_from_stream(upload_buffer.create_istream(), azure::storage::file_access_condition(), option, m_context); + + // download target file in parallel. 
+ azure::storage::operation_context context; + concurrency::streams::container_buffer<std::vector<uint8_t>> download_buffer; + + utility::size64_t actual_offset = get_random_int32() % 255 + 1; + utility::size64_t actual_length = target_length - actual_offset; + file.download_range_to_stream(download_buffer.create_ostream(), actual_offset, std::numeric_limits<utility::size64_t>::max(), azure::storage::file_access_condition(), option, context); + + check_parallelism(context, 2); + CHECK(file.properties().size() == target_length); + CHECK(download_buffer.collection().size() == actual_length); + CHECK(std::equal(data.begin() + actual_offset, data.end(), download_buffer.collection().begin())); + } + } + + /// <summary> + /// Test parallel download with length too large + /// </summary> + TEST_FIXTURE(file_test_base, parallel_download_with_length_too_large) + { + // file with size larger than 32MB. + // With offset not zero. + { + auto file_name = get_random_string(20); + auto file = m_directory.get_file_reference(file_name); + size_t target_length = 100 * 1024 * 1024; + azure::storage::file_request_options option; + option.set_parallelism_factor(10); + std::vector<uint8_t> data; + data.resize(target_length); + fill_buffer(data); + concurrency::streams::container_buffer<std::vector<uint8_t>> upload_buffer(data); + file.upload_from_stream(upload_buffer.create_istream(), azure::storage::file_access_condition(), option, m_context); + + // download target file in parallel. 
+ azure::storage::operation_context context; + concurrency::streams::container_buffer<std::vector<uint8_t>> download_buffer; + + utility::size64_t actual_offset = get_random_int32() % 255 + 1; + utility::size64_t actual_length = target_length - actual_offset; + file.download_range_to_stream(download_buffer.create_ostream(), actual_offset, actual_length * 2, azure::storage::file_access_condition(), option, context); + + check_parallelism(context, 10); + CHECK(file.properties().size() == target_length); + CHECK(download_buffer.collection().size() == actual_length); + CHECK(std::equal(data.begin() + actual_offset, data.end(), download_buffer.collection().begin())); + } + } + TEST_FIXTURE(file_test_base, parallel_download_with_md5) { // transactional md5 enabled. @@ -602,4 +880,185 @@ SUITE(File) check_parallelism(context, 1); CHECK(file.properties().size() == target_length); } -} \ No newline at end of file + + TEST_FIXTURE(file_test_base, file_lease) + { + m_file.create(1024); + CHECK(azure::storage::lease_status::unspecified == m_file.properties().lease_status()); + CHECK(azure::storage::lease_state::unspecified == m_file.properties().lease_state()); + CHECK(azure::storage::lease_duration::unspecified == m_file.properties().lease_duration()); + m_file.download_attributes(); + CHECK(azure::storage::lease_status::unlocked == m_file.properties().lease_status()); + CHECK(azure::storage::lease_state::available == m_file.properties().lease_state()); + CHECK(azure::storage::lease_duration::unspecified == m_file.properties().lease_duration()); + + // Acquire + utility::string_t lease_id = m_file.acquire_lease(utility::string_t()); + CHECK(azure::storage::lease_status::unspecified == m_file.properties().lease_status()); + CHECK(azure::storage::lease_state::unspecified == m_file.properties().lease_state()); + CHECK(azure::storage::lease_duration::unspecified == m_file.properties().lease_duration()); + m_file.download_attributes(); + CHECK(azure::storage::lease_status::locked == 
m_file.properties().lease_status()); + CHECK(azure::storage::lease_state::leased == m_file.properties().lease_state()); + CHECK(azure::storage::lease_duration::infinite == m_file.properties().lease_duration()); + + // Change + utility::string_t lease_id2 = utility::uuid_to_string(utility::new_uuid()); + azure::storage::file_access_condition condition; + condition.set_lease_id(lease_id); + lease_id = m_file.change_lease(lease_id2, condition); + utility::details::inplace_tolower(lease_id); + utility::details::inplace_tolower(lease_id2); + CHECK(lease_id == lease_id2); + CHECK(azure::storage::lease_status::unspecified == m_file.properties().lease_status()); + CHECK(azure::storage::lease_state::unspecified == m_file.properties().lease_state()); + CHECK(azure::storage::lease_duration::unspecified == m_file.properties().lease_duration()); + + // Break + m_file.break_lease(); + CHECK(azure::storage::lease_status::unspecified == m_file.properties().lease_status()); + CHECK(azure::storage::lease_state::unspecified == m_file.properties().lease_state()); + CHECK(azure::storage::lease_duration::unspecified == m_file.properties().lease_duration()); + m_file.download_attributes(); + CHECK(azure::storage::lease_status::unlocked == m_file.properties().lease_status()); + CHECK(azure::storage::lease_state::broken == m_file.properties().lease_state()); + CHECK(azure::storage::lease_duration::unspecified == m_file.properties().lease_duration()); + + lease_id = m_file.acquire_lease(utility::string_t()); + condition.set_lease_id(lease_id); + m_file.break_lease(condition, azure::storage::file_request_options(), m_context); + + // Acquire with proposed lease id + lease_id2 = utility::uuid_to_string(utility::new_uuid()); + lease_id = m_file.acquire_lease(lease_id2); + utility::details::inplace_tolower(lease_id); + utility::details::inplace_tolower(lease_id2); + CHECK(lease_id == lease_id2); + + // Release + CHECK_THROW(m_file.release_lease(condition), azure::storage::storage_exception); + 
condition.set_lease_id(lease_id); + m_file.release_lease(condition); + CHECK(azure::storage::lease_status::unspecified == m_file.properties().lease_status()); + CHECK(azure::storage::lease_state::unspecified == m_file.properties().lease_state()); + CHECK(azure::storage::lease_duration::unspecified == m_file.properties().lease_duration()); + m_file.download_attributes(); + CHECK(azure::storage::lease_status::unlocked == m_file.properties().lease_status()); + CHECK(azure::storage::lease_state::available == m_file.properties().lease_state()); + CHECK(azure::storage::lease_duration::unspecified == m_file.properties().lease_duration()); + } + + TEST_FIXTURE(file_test_base, file_operations_with_lease) + { + m_file.create(1024); + utility::string_t lease_id = m_file.acquire_lease(utility::string_t()); + + azure::storage::file_access_condition lease_condition; + lease_condition.set_lease_id(lease_id); + azure::storage::file_access_condition wrong_condition; + wrong_condition.set_lease_id(utility::uuid_to_string(utility::new_uuid())); + azure::storage::file_access_condition empty_condition; + + std::vector<azure::storage::file_access_condition> conditions = + { + empty_condition, wrong_condition, lease_condition + }; + + utility::string_t upload_content = _XPLATSTR("content"); + concurrency::streams::container_buffer<std::vector<uint8_t>> download_buffer; + auto copy_src = m_directory.get_file_reference(_XPLATSTR("copy_src")); + copy_src.create(1024); + copy_src.upload_text(upload_content); + utility::string_t copy_id; + std::vector<std::function<void(azure::storage::file_access_condition)>> funcs = + { + // Create + [&](azure::storage::file_access_condition condition) { m_file.create(2048, condition, azure::storage::file_request_options(), m_context); }, + // Create if not exists + [&](azure::storage::file_access_condition condition) { m_file.create_if_not_exists(2048, condition, azure::storage::file_request_options(), m_context); }, + // Download attributes + [&](azure::storage::file_access_condition condition) { m_file.download_attributes(condition, 
azure::storage::file_request_options(), m_context); }, + // Exist + [&](azure::storage::file_access_condition condition) { m_file.exists(condition, azure::storage::file_request_options(), m_context); }, + // Upload properties + [&](azure::storage::file_access_condition condition) { m_file.upload_properties(condition, azure::storage::file_request_options(), m_context); }, + // Upload metadata + [&](azure::storage::file_access_condition condition) { m_file.upload_metadata(condition, azure::storage::file_request_options(), m_context); }, + // Resize + [&](azure::storage::file_access_condition condition) { m_file.resize(4096, condition, azure::storage::file_request_options(), m_context); }, + // Upload from stream + [&](azure::storage::file_access_condition condition) { m_file.upload_from_stream(concurrency::streams::bytestream::open_istream(utility::conversions::to_utf8string(upload_content)), condition, azure::storage::file_request_options(), m_context); }, + // Write range + [&](azure::storage::file_access_condition condition) { m_file.write_range(concurrency::streams::bytestream::open_istream(utility::conversions::to_utf8string(upload_content)), 0, utility::string_t(), condition, azure::storage::file_request_options(), m_context); }, + // List ranges + [&](azure::storage::file_access_condition condition) { m_file.list_ranges(0, 4096, condition, azure::storage::file_request_options(), m_context); }, + // Download range + [&](azure::storage::file_access_condition condition) { m_file.download_to_stream(download_buffer.create_ostream(), condition, azure::storage::file_request_options(), m_context); }, + // Clear range + [&](azure::storage::file_access_condition condition) { m_file.clear_range(0, 1, condition, azure::storage::file_request_options(), m_context); }, + // Start copy + [&](azure::storage::file_access_condition condition) + { + auto id = m_file.start_copy(copy_src.uri().primary_uri(), azure::storage::access_condition(), condition, 
azure::storage::file_request_options(), m_context); + copy_id = id.empty() ? copy_id : id; + }, + // Abort copy + [&](azure::storage::file_access_condition condition) { m_file.abort_copy(copy_id, condition, azure::storage::file_request_options(), m_context); }, + // Delete if exists + [&](azure::storage::file_access_condition condition) { m_file.delete_file_if_exists(condition, azure::storage::file_request_options(), m_context); }, + }; + + std::vector<std::vector<int>> expected_results = + { + // Create + {0, 0, 1}, + // Create if not exists + {0, 0, 1}, + // Download Attributes + {1, 0, 1}, + // Exist + {1, 0, 1}, + // Upload properties + {0, 0, 1}, + // Upload metadata + {0, 0, 1}, + // Resize + {0, 0, 1}, + // Upload from stream + {0, 0, 1}, + // Write range + {0, 0, 1}, + // List ranges + {1, 0, 1}, + // Download range + {1, 0, 1}, + // Clear range + {0, 0, 1}, + // Start copy + {0, 0, 1}, + // Abort copy + {0, 0, 1}, + // Delete if exists + {0, 0, 1}, + }; + CHECK_EQUAL(funcs.size(), expected_results.size()); + + for (int i = 0; i < funcs.size(); ++i) + { + for (int j = 0; j < conditions.size(); ++j) + { + try + { + funcs[i](conditions[j]); + } + catch (azure::storage::storage_exception& e) + { + if (expected_results[i][j] == true && e.result().http_status_code() == web::http::status_codes::PreconditionFailed) + { + throw; + } + } + } + } + } +} diff --git a/Microsoft.WindowsAzure.Storage/tests/cloud_page_blob_test.cpp b/Microsoft.WindowsAzure.Storage/tests/cloud_page_blob_test.cpp index c8acff64..43c056b8 100644 --- a/Microsoft.WindowsAzure.Storage/tests/cloud_page_blob_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/cloud_page_blob_test.cpp @@ -19,6 +19,8 @@ #include "blob_test_base.h" #include "check_macros.h" +#include + #pragma region Fixture void page_blob_test_base::check_page_ranges_equal(const std::vector<azure::storage::page_range>& page_ranges) @@ -32,6 +34,8 @@ void page_blob_test_base::check_page_ranges_equal(const std::vector pages; utility::string_t md5_header; -
m_context.set_sending_request([&md5_header] (web::http::http_request& request, azure::storage::operation_context) + utility::string_t crc64_header; + m_context.set_sending_request([&md5_header, &crc64_header] (web::http::http_request& request, azure::storage::operation_context) { if (!request.headers().match(web::http::header_names::content_md5, md5_header)) { md5_header.clear(); } + if (!request.headers().match(azure::storage::protocol::ms_header_content_crc64, crc64_header)) + { + crc64_header.clear(); + } }); - m_blob.create(12 * 1024 * 1024, 0, azure::storage::access_condition(), options, m_context); + m_blob.create(32 * 1024 * 1024, 0, azure::storage::access_condition(), options, m_context); check_blob_no_stale_property(m_blob); options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); for (int i = 0; i < 3; ++i) { - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); azure::storage::page_range range(i * 1024, i * 1024 + buffer.size() - 1); pages.push_back(range); @@ -152,11 +162,12 @@ SUITE(Blob) check_page_ranges_equal(pages); options.set_use_transactional_md5(false); - for (int i = 3; i < 6; ++i) + options.set_use_transactional_crc64(false); + for (int i = 4; i < 8; ++i) { auto md5 = fill_buffer_and_get_md5(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); - azure::storage::page_range range(i * 1536, i * 1536 + buffer.size() - 1); + azure::storage::page_range range(i * 1024, i * 1024 + buffer.size() - 1); pages.push_back(range); m_blob.upload_pages(stream, range.start_offset(), md5, azure::storage::access_condition(), options, m_context); CHECK_UTF8_EQUAL(md5, md5_header); @@ -165,49 +176,100 @@ SUITE(Blob) check_page_ranges_equal(pages); options.set_use_transactional_md5(true); - for (int i = 6; i < 9; ++i) + options.set_use_transactional_crc64(false); + for (int i = 9; i < 12; ++i) { auto md5 = 
fill_buffer_and_get_md5(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); - azure::storage::page_range range(i * 2048, i * 2048 + buffer.size() - 1); + azure::storage::page_range range(i * 1024, i * 1024 + buffer.size() - 1); pages.push_back(range); m_blob.upload_pages(stream, range.start_offset(), utility::string_t(), azure::storage::access_condition(), options, m_context); CHECK_UTF8_EQUAL(md5, md5_header); } + check_page_ranges_equal(pages); + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); + for (int i = 13; i < 16; ++i) { - // upload a page range of max_block_size + auto crc64 = fill_buffer_and_get_crc64(buffer); + uint64_t crc64_val = azure::storage::crc64(buffer.data(), buffer.size()); + auto stream = concurrency::streams::bytestream::open_istream(buffer); + azure::storage::page_range range(i * 1024, i * 1024 + buffer.size() - 1); + pages.push_back(range); + m_blob.upload_pages(stream, range.start_offset(), crc64_val, azure::storage::access_condition(), options, m_context); + CHECK_UTF8_EQUAL(crc64, crc64_header); + } + + check_page_ranges_equal(pages); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); + for (int i = 17; i < 20; ++i) + { + auto crc64 = fill_buffer_and_get_crc64(buffer); + auto stream = concurrency::streams::bytestream::open_istream(buffer); + azure::storage::page_range range(i * 1024, i * 1024 + buffer.size() - 1); + pages.push_back(range); + m_blob.upload_pages(stream, range.start_offset(), azure::storage::checksum_none, azure::storage::access_condition(), options, m_context); + CHECK_UTF8_EQUAL(crc64, crc64_header); + } + check_page_ranges_equal(pages); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); + { + // upload a page range of max_page_size std::vector big_buffer; - big_buffer.resize(azure::storage::protocol::max_block_size); + 
big_buffer.resize(azure::storage::protocol::max_page_size); auto md5 = fill_buffer_and_get_md5(big_buffer); auto stream = concurrency::streams::bytestream::open_istream(big_buffer); - azure::storage::page_range range(4 * 1024 * 1024, 4 * 1024 * 1024 + azure::storage::protocol::max_block_size - 1); + azure::storage::page_range range(4 * 1024 * 1024, 4 * 1024 * 1024 + azure::storage::protocol::max_page_size - 1); pages.push_back(range); m_blob.upload_pages(stream, range.start_offset(), md5, azure::storage::access_condition(), options, m_context); CHECK_UTF8_EQUAL(md5, md5_header); } - + { + // upload another page range of max_page_size + std::vector big_buffer; + big_buffer.resize(azure::storage::protocol::max_page_size); + auto crc64 = fill_buffer_and_get_crc64(big_buffer); + uint64_t crc64_val = azure::storage::crc64(big_buffer.data(), big_buffer.size()); + auto stream = concurrency::streams::bytestream::open_istream(big_buffer); + azure::storage::page_range range(12 * 1024 * 1024, 12 * 1024 * 1024 + azure::storage::protocol::max_page_size - 1); + pages.push_back(range); + m_blob.upload_pages(stream, range.start_offset(), crc64_val, azure::storage::access_condition(), options, m_context); + CHECK_UTF8_EQUAL(crc64, crc64_header); + } check_page_ranges_equal(pages); options.set_use_transactional_md5(true); + options.set_use_transactional_crc64(false); { - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); CHECK_THROW(m_blob.upload_pages(stream, 0, dummy_md5, azure::storage::access_condition(), options, m_context), azure::storage::storage_exception); CHECK_UTF8_EQUAL(dummy_md5, md5_header); } + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); + { + fill_buffer(buffer); + auto stream = concurrency::streams::bytestream::open_istream(buffer); + CHECK_THROW(m_blob.upload_pages(stream, 0, dummy_crc64_val, azure::storage::access_condition(), options, m_context), 
azure::storage::storage_exception); + CHECK_UTF8_EQUAL(dummy_crc64, crc64_header); + } - // trying upload page ranges bigger than max_block_size + // trying upload page ranges bigger than max_page_size { - buffer.resize(azure::storage::protocol::max_block_size + 1); - fill_buffer_and_get_md5(buffer); + buffer.resize(azure::storage::protocol::max_page_size + 1); + fill_buffer(buffer); - azure::storage::page_range range(8 * 1024 * 1024, 8 * 1024 * 1024 + azure::storage::protocol::max_block_size -1); + azure::storage::page_range range(20 * 1024 * 1024, 20 * 1024 * 1024 + azure::storage::protocol::max_page_size -1); auto stream = concurrency::streams::bytestream::open_istream(buffer); CHECK_THROW(m_blob.upload_pages(stream, range.start_offset(), utility::string_t(), azure::storage::access_condition(), options, m_context), std::invalid_argument); } - check_page_ranges_equal(pages); m_context.set_sending_request(std::function()); @@ -219,7 +281,7 @@ SUITE(Blob) buffer.resize(16 * 1024); azure::storage::blob_request_options options; - fill_buffer_and_get_md5(buffer); + fill_buffer(buffer); auto stream = concurrency::streams::bytestream::open_istream(buffer); m_blob.upload_from_stream(stream, 0, azure::storage::access_condition(), options, m_context); @@ -242,8 +304,22 @@ SUITE(Blob) { const size_t size = 6 * 1024 * 1024; azure::storage::blob_request_options options; - options.set_use_transactional_md5(true); + options.set_store_blob_content_md5(false); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); + check_parallelism(upload_and_download(m_blob, size, 0, 0, true, options, 3, false), 1); + m_blob.delete_blob(); + m_blob.properties().set_content_md5(utility::string_t()); + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); + check_parallelism(upload_and_download(m_blob, size, 0, 0, true, options, 3, false), 1); + m_blob.delete_blob(); + m_blob.properties().set_content_md5(utility::string_t()); 
+ + options.set_use_transactional_md5(true); + options.set_use_transactional_crc64(false); check_parallelism(upload_and_download(m_blob, size, 0, 0, true, options, 3, false), 1); m_blob.delete_blob(); m_blob.properties().set_content_md5(utility::string_t()); @@ -283,15 +359,24 @@ SUITE(Blob) { const size_t buffer_size = 6 * 1024 * 1024; const size_t blob_size = 4 * 1024 * 1024; - azure::storage::blob_request_options options; - options.set_use_transactional_md5(true); const size_t buffer_offsets[2] = { 0, 1024 }; for (auto buffer_offset : buffer_offsets) { + azure::storage::blob_request_options options; + options.set_stream_write_size_in_bytes(blob_size); options.set_store_blob_content_md5(false); options.set_parallelism_factor(1); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); + check_parallelism(upload_and_download(m_blob, buffer_size, buffer_offset, blob_size, true, options, 2, false), 1); + m_blob.delete_blob(); + m_blob.properties().set_content_md5(utility::string_t()); + + options.set_use_transactional_md5(true); + options.set_use_transactional_crc64(false); check_parallelism(upload_and_download(m_blob, buffer_size, buffer_offset, blob_size, true, options, 2, false), 1); m_blob.delete_blob(); m_blob.properties().set_content_md5(utility::string_t()); @@ -320,15 +405,32 @@ SUITE(Blob) { const size_t buffer_size = 6 * 1024 * 1024; const size_t blob_size = 4 * 1024 * 1024; - azure::storage::blob_request_options options; - options.set_use_transactional_md5(true); + + const size_t buffer_offsets[2] = { 0, 1024 }; for (auto buffer_offset : buffer_offsets) { + azure::storage::blob_request_options options; + options.set_stream_write_size_in_bytes(blob_size); options.set_store_blob_content_md5(false); options.set_parallelism_factor(1); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(false); + check_parallelism(upload_and_download(m_blob, buffer_size, buffer_offset, blob_size, false, options, 
2, false), 1); + m_blob.delete_blob(); + m_blob.properties().set_content_md5(utility::string_t()); + + options.set_use_transactional_md5(false); + options.set_use_transactional_crc64(true); + check_parallelism(upload_and_download(m_blob, buffer_size, buffer_offset, blob_size, false, options, 2, false), 1); + m_blob.delete_blob(); + m_blob.properties().set_content_md5(utility::string_t()); + + options.set_use_transactional_md5(true); + options.set_use_transactional_crc64(false); check_parallelism(upload_and_download(m_blob, buffer_size, buffer_offset, blob_size, false, options, 2, false), 1); m_blob.delete_blob(); m_blob.properties().set_content_md5(utility::string_t()); @@ -371,7 +473,7 @@ SUITE(Blob) utility::string_t md5_header; m_context.set_sending_request([&md5_header] (web::http::http_request& request, azure::storage::operation_context) { - if (!request.headers().match(_XPLATSTR("x-ms-blob-content-md5"), md5_header)) + if (!request.headers().match(azure::storage::protocol::ms_header_blob_content_md5, md5_header)) { md5_header.clear(); } @@ -505,7 +607,7 @@ SUITE(Blob) m_context.set_response_received(std::function()); } - TEST_FIXTURE(page_blob_test_base, page_blob_prevsnapshot) + TEST_FIXTURE(page_blob_test_base, page_blob_prevsnapshot_time) { m_blob.create(2048, 0, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); @@ -583,4 +685,1041 @@ SUITE(Blob) CHECK(2047 == diff[0].end_offset()); } } + + //TEST_FIXTURE(page_blob_test_base, page_blob_prevsnapshot_url) + //{ + // auto get_snapshot_url = [](azure::storage::cloud_page_blob& snapshot) + // { + // return snapshot.uri().primary_uri().to_string() + _XPLATSTR("?snapshot=") + snapshot.snapshot_time(); + // }; + + // m_blob.create(2048, 0, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + + // azure::storage::cloud_page_blob snapshot1 = m_blob.create_snapshot(azure::storage::cloud_metadata(), azure::storage::access_condition(), 
azure::storage::blob_request_options(), m_context); + // auto diff = m_blob.download_page_ranges_diff_md(get_snapshot_url(snapshot1), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + // CHECK(0 == diff.size()); + + // { + // utility::string_t content(2048, _XPLATSTR('A')); + // auto utf8_body = utility::conversions::to_utf8string(content); + // auto stream = concurrency::streams::bytestream::open_istream(std::move(utf8_body)); + // m_blob.upload_pages(stream, 0, _XPLATSTR("")); + // diff = m_blob.download_page_ranges_diff_md(get_snapshot_url(snapshot1), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + // CHECK(1 == diff.size()); + // CHECK_EQUAL(false, diff[0].is_cleared_rage()); + // CHECK(0 == diff[0].start_offset()); + // CHECK(2047 == diff[0].end_offset()); + // } + + // azure::storage::cloud_page_blob snapshot2 = m_blob.create_snapshot(azure::storage::cloud_metadata(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + // auto diff2 = snapshot2.download_page_ranges_diff_md(get_snapshot_url(snapshot1), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + // CHECK_EQUAL(false, diff[0].is_cleared_rage()); + // CHECK(0 == diff[0].start_offset()); + // CHECK(2047 == diff[0].end_offset()); + + // { + // utility::string_t content(512, _XPLATSTR('B')); + // auto utf8_body = utility::conversions::to_utf8string(content); + // auto stream = concurrency::streams::bytestream::open_istream(std::move(utf8_body)); + // m_blob.upload_pages(stream, 0, _XPLATSTR("")); + // m_blob.clear_pages(512, 512); + // diff = m_blob.download_page_ranges_diff_md(get_snapshot_url(snapshot2), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + // CHECK(2 == diff.size()); + // if (diff[0].is_cleared_rage() == true) + // { + // auto temp = diff[0]; + // diff[0] = diff[1]; + // diff[1] = temp; + // } + 
// CHECK_EQUAL(false, diff[0].is_cleared_rage()); + // CHECK(0 == diff[0].start_offset()); + // CHECK(511 == diff[0].end_offset()); + + // CHECK_EQUAL(true, diff[1].is_cleared_rage()); + // CHECK(512 == diff[1].start_offset()); + // CHECK(1023 == diff[1].end_offset()); + // } + + // azure::storage::cloud_page_blob snapshot3 = m_blob.create_snapshot(azure::storage::cloud_metadata(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + // auto diff3 = snapshot3.download_page_ranges_diff_md(get_snapshot_url(snapshot2), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + // CHECK(2 == diff.size()); + // if (diff[0].is_cleared_rage() == true) + // { + // auto temp = diff[0]; + // diff[0] = diff[1]; + // diff[1] = temp; + // } + // CHECK_EQUAL(false, diff[0].is_cleared_rage()); + // CHECK(0 == diff[0].start_offset()); + // CHECK(511 == diff[0].end_offset()); + + // CHECK_EQUAL(true, diff[1].is_cleared_rage()); + // CHECK(512 == diff[1].start_offset()); + // CHECK(1023 == diff[1].end_offset()); + + // { + // utility::string_t content(2048, _XPLATSTR('A')); + // auto utf8_body = utility::conversions::to_utf8string(content); + // auto stream = concurrency::streams::bytestream::open_istream(std::move(utf8_body)); + // m_blob.upload_pages(stream, 0, _XPLATSTR("")); + // diff = m_blob.download_page_ranges_diff_md(get_snapshot_url(snapshot1), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); + // CHECK(1 == diff.size()); + // CHECK_EQUAL(false, diff[0].is_cleared_rage()); + // CHECK(0 == diff[0].start_offset()); + // CHECK(2047 == diff[0].end_offset()); + // } + //} + + TEST_FIXTURE(page_blob_test_base, page_blob_incremental_copy) + { + // get sas token for test + azure::storage::blob_shared_access_policy policy; + policy.set_permissions(azure::storage::blob_shared_access_policy::permissions::read); + policy.set_start(utility::datetime::utc_now() - 
utility::datetime::from_minutes(5)); + policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_minutes(30)); + auto sas_token = m_container.get_shared_access_signature(policy); + + // prepare data + m_blob.metadata()[_XPLATSTR("key1")] = _XPLATSTR("value1"); + m_blob.create(2048); + azure::storage::cloud_page_blob source_snapshot; + auto inc_copy = m_container.get_page_blob_reference(m_blob.name() + _XPLATSTR("_copy")); + + // Scenario: incremental copy to create destination page blob + { + // perform actions + source_snapshot = m_blob.create_snapshot(); + auto source_uri = azure::storage::storage_credentials(sas_token).transform_uri(source_snapshot.snapshot_qualified_uri().primary_uri()); + auto copy_id = inc_copy.start_incremental_copy(source_uri); + auto inc_copy_ref = m_container.get_page_blob_reference(inc_copy.name()); + wait_for_copy(inc_copy_ref); + + // verify copy id is valid but abort copy operation is invalid. + azure::storage::operation_context context; + CHECK_THROW(inc_copy.abort_copy(copy_id, azure::storage::access_condition(), azure::storage::blob_request_options(), context), azure::storage::storage_exception); + CHECK_EQUAL(1u, context.request_results().size()); + CHECK_EQUAL(web::http::status_codes::Conflict, context.request_results().back().http_status_code()); + + // verify incremental copy related properties and metadata. 
+ CHECK_EQUAL(true, inc_copy_ref.properties().is_incremental_copy()); + CHECK(inc_copy_ref.copy_state().destination_snapshot_time().is_initialized()); + CHECK_EQUAL(1u, inc_copy_ref.metadata().size()); + CHECK_UTF8_EQUAL(_XPLATSTR("value1"), inc_copy_ref.metadata()[_XPLATSTR("key1")]); + + // verify destination blob properties retrieved with list blobs and snapshots of destination blobs + auto iter = m_container.list_blobs(inc_copy_ref.name(), true, azure::storage::blob_listing_details::snapshots, 10, azure::storage::blob_request_options(), azure::storage::operation_context()); + auto dest_blobs = transform_if(iter, + [](const azure::storage::list_blob_item& item)->bool { return item.is_blob(); }, + [](const azure::storage::list_blob_item& item)->azure::storage::cloud_blob { return item.as_blob(); }); + CHECK_EQUAL(2u, dest_blobs.size()); + + auto dest_blob_it = std::find_if(dest_blobs.cbegin(), dest_blobs.cend(), [](const azure::storage::cloud_blob& blob)->bool { return blob.snapshot_time().empty(); }); + CHECK(dest_blob_it != dest_blobs.end()); + CHECK_EQUAL(true, dest_blob_it->properties().is_incremental_copy()); + CHECK(dest_blob_it->copy_state().destination_snapshot_time().is_initialized()); + + auto dest_snapshot_it = std::find_if(dest_blobs.begin(), dest_blobs.end(), [](const azure::storage::cloud_blob& blob)->bool { return !blob.snapshot_time().empty(); }); + CHECK(dest_snapshot_it != dest_blobs.end()); + CHECK_EQUAL(true, dest_snapshot_it->properties().is_incremental_copy()); + CHECK(dest_snapshot_it->copy_state().destination_snapshot_time().is_initialized()); + CHECK(dest_blob_it->copy_state().destination_snapshot_time() == parse_datetime(dest_snapshot_it->snapshot_time(), utility::datetime::date_format::ISO_8601)); + + // verify readability of destination snapshot + concurrency::streams::container_buffer<std::vector<uint8_t>> buff; + CHECK_NOTHROW(dest_snapshot_it->download_to_stream(buff.create_ostream())); + } + + // Scenario: incremental copy new snapshot to destination
blob + { + // make some changes on source + utility::string_t content(2048, _XPLATSTR('A')); + auto utf8_body = utility::conversions::to_utf8string(content); + auto stream = concurrency::streams::bytestream::open_istream(std::move(utf8_body)); + m_blob.upload_pages(stream, 0, _XPLATSTR("")); + + // create new snapshot of source and incremental copy once again. + source_snapshot = m_blob.create_snapshot(); + auto source_uri = azure::storage::storage_credentials(sas_token).transform_uri(source_snapshot.snapshot_qualified_uri().primary_uri()); + inc_copy.start_incremental_copy(source_uri); + auto inc_copy_ref = m_container.get_page_blob_reference(inc_copy.name()); + wait_for_copy(inc_copy_ref); + + // verify incremental copy related properties and metadata. + CHECK_EQUAL(true, inc_copy_ref.properties().is_incremental_copy()); + CHECK(inc_copy_ref.copy_state().destination_snapshot_time().is_initialized()); + CHECK_EQUAL(1u, inc_copy_ref.metadata().size()); + CHECK_UTF8_EQUAL(_XPLATSTR("value1"), inc_copy_ref.metadata()[_XPLATSTR("key1")]); + + // verify snapshots of destination blobs + auto iter = m_container.list_blobs(inc_copy_ref.name(), true, azure::storage::blob_listing_details::snapshots, 10, azure::storage::blob_request_options(), azure::storage::operation_context()); + auto dest_blobs = transform_if(iter, + [](const azure::storage::list_blob_item& item)->bool { return item.is_blob(); }, + [](const azure::storage::list_blob_item& item)->azure::storage::cloud_blob { return item.as_blob(); }); + CHECK_EQUAL(3u, dest_blobs.size()); + CHECK_EQUAL(2, std::count_if(dest_blobs.begin(), dest_blobs.end(), [](const azure::storage::cloud_blob& b) -> bool { return !b.snapshot_time().empty(); })); + std::sort(dest_blobs.begin(), dest_blobs.end(), [](const azure::storage::cloud_blob& l, const azure::storage::cloud_blob& r) -> bool + { + return parse_datetime(l.snapshot_time(), utility::datetime::date_format::ISO_8601).to_interval() < + parse_datetime(r.snapshot_time(), 
utility::datetime::date_format::ISO_8601).to_interval(); + }); + CHECK(inc_copy_ref.copy_state().destination_snapshot_time() == parse_datetime(dest_blobs.back().snapshot_time(), utility::datetime::date_format::ISO_8601)); + } + + // Scenario: delete destination snapshot and perform incremental copy again + { + // verify the scenario + CHECK_NOTHROW(inc_copy.delete_blob(azure::storage::delete_snapshots_option::delete_snapshots_only, azure::storage::access_condition(), azure::storage::blob_request_options(), azure::storage::operation_context())); + auto source_uri = azure::storage::storage_credentials(sas_token).transform_uri(source_snapshot.snapshot_qualified_uri().primary_uri()); + CHECK_NOTHROW(inc_copy.start_incremental_copy(source_uri)); + auto inc_copy_ref = m_container.get_page_blob_reference(inc_copy.name()); + wait_for_copy(inc_copy_ref); + + // verify snapshots of destination blob + auto iter = m_container.list_blobs(inc_copy_ref.name(), true, azure::storage::blob_listing_details::snapshots, 10, azure::storage::blob_request_options(), azure::storage::operation_context()); + auto dest_blobs = transform_if(iter, + [](const azure::storage::list_blob_item& item)->bool { return item.is_blob(); }, + [](const azure::storage::list_blob_item& item)->azure::storage::cloud_blob { return item.as_blob(); }); + CHECK_EQUAL(2u, dest_blobs.size()); + } + + // Misc. 
verifications + { + azure::storage::operation_context context; + + // verify incremental copy same snapshot + auto source_uri = azure::storage::storage_credentials(sas_token).transform_uri(source_snapshot.snapshot_qualified_uri().primary_uri()); + CHECK_THROW(inc_copy.start_incremental_copy(source_uri, azure::storage::access_condition(), azure::storage::blob_request_options(), context), azure::storage::storage_exception); + CHECK_EQUAL(1u, context.request_results().size()); + CHECK_EQUAL(web::http::status_codes::Conflict, context.request_results().back().http_status_code()); + + // verify readability of destination blob + concurrency::streams::container_buffer<std::vector<uint8_t>> buff; + CHECK_THROW(inc_copy.download_to_stream(buff.create_ostream(), azure::storage::access_condition(), azure::storage::blob_request_options(), context), azure::storage::storage_exception); + CHECK_EQUAL(2u, context.request_results().size()); + CHECK_EQUAL(web::http::status_codes::Conflict, context.request_results().back().http_status_code()); + + // verify deletion of destination blob + CHECK_NOTHROW(inc_copy.delete_blob(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), azure::storage::blob_request_options(), context)); + } + } + + // Validate setting premium blob tier for a page blob on a premium storage account.
+ TEST_FIXTURE(premium_page_blob_test_base, page_blob_premium_tier) + { + // preparation + azure::storage::blob_request_options options; + // check default tier is p10 + m_blob.create(1024); + m_blob.download_attributes(); + CHECK(azure::storage::premium_blob_tier::p10 == m_blob.properties().premium_blob_tier()); + + // check create page blob sets the tier to be p20 + m_blob.create(1024, azure::storage::premium_blob_tier::p20, 0, azure::storage::access_condition(), options, azure::storage::operation_context()); + CHECK(azure::storage::premium_blob_tier::p20 == m_blob.properties().premium_blob_tier()); + m_blob.download_attributes(); + CHECK(azure::storage::premium_blob_tier::p20 == m_blob.properties().premium_blob_tier()); + + // test can convert p20 to p30, p30 to p40. + m_blob.set_premium_blob_tier(azure::storage::premium_blob_tier::p30, azure::storage::access_condition(), options, azure::storage::operation_context()); + // validate local has been updated. + CHECK(azure::storage::premium_blob_tier::p30 == m_blob.properties().premium_blob_tier()); + // validate server has been updated + m_blob.download_attributes(); + CHECK(azure::storage::premium_blob_tier::p30 == m_blob.properties().premium_blob_tier()); + m_blob.set_premium_blob_tier(azure::storage::premium_blob_tier::p40, azure::storage::access_condition(), options, azure::storage::operation_context()); + // validate local has been updated. 
+ CHECK(azure::storage::premium_blob_tier::p40 == m_blob.properties().premium_blob_tier()); + // validate server has been updated + m_blob.download_attributes(); + CHECK(azure::storage::premium_blob_tier::p40 == m_blob.properties().premium_blob_tier()); + } + + TEST_FIXTURE(premium_page_blob_test_base, set_blob_tier_with_lease) + { + // preparation + azure::storage::blob_request_options options; + // check default tier is p10 + m_blob.create(1024); + m_blob.download_attributes(); + CHECK(azure::storage::premium_blob_tier::p10 == m_blob.properties().premium_blob_tier()); + + // acquire a lease + auto lease_id = m_blob.acquire_lease(azure::storage::lease_time(), _XPLATSTR("")); + + // set the acquired lease to access condition. + azure::storage::access_condition condition; + condition.set_lease_id(lease_id); + + // check create page blob sets the tier to be p20 + m_blob.create(1024, azure::storage::premium_blob_tier::p20, 0, condition, options, azure::storage::operation_context()); + CHECK(azure::storage::premium_blob_tier::p20 == m_blob.properties().premium_blob_tier()); + m_blob.download_attributes(); + CHECK(azure::storage::premium_blob_tier::p20 == m_blob.properties().premium_blob_tier()); + + // test can convert p20 to p30, p30 to p40. + m_blob.set_premium_blob_tier(azure::storage::premium_blob_tier::p30, condition, options, azure::storage::operation_context()); + // validate local has been updated. + CHECK(azure::storage::premium_blob_tier::p30 == m_blob.properties().premium_blob_tier()); + // validate server has been updated + m_blob.download_attributes(); + CHECK(azure::storage::premium_blob_tier::p30 == m_blob.properties().premium_blob_tier()); + m_blob.set_premium_blob_tier(azure::storage::premium_blob_tier::p40, condition, options, azure::storage::operation_context()); + // validate local has been updated. 
+ CHECK(azure::storage::premium_blob_tier::p40 == m_blob.properties().premium_blob_tier()); + // validate server has been updated + m_blob.download_attributes(); + CHECK(azure::storage::premium_blob_tier::p40 == m_blob.properties().premium_blob_tier()); + + // validate that omitting the lease id reports a failure. + CHECK_STORAGE_EXCEPTION(m_blob.set_premium_blob_tier(azure::storage::premium_blob_tier::p30, azure::storage::access_condition(), options, azure::storage::operation_context()), ACTIVE_LEASE_ERROR_MESSAGE); + CHECK(azure::storage::premium_blob_tier::p40 == m_blob.properties().premium_blob_tier()); + } + + TEST_FIXTURE(page_blob_test_base, page_blob_create_delete_cancellation) + { + + { + // cancel the cancellation prior to the operation + auto cancel_token_src = pplx::cancellation_token_source(); + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.create_async(1024, azure::storage::premium_blob_tier::unknown, 0, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + // cancel the cancellation during the operation + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.create_async(1024, azure::storage::premium_blob_tier::unknown, 0, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(3)); // sleep for some time before canceling the request and checking the result.
+ cancel_token_src.cancel(); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.create_async(1024, azure::storage::premium_blob_tier::unknown, 0, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + task_result.get(); + // cancel the cancellation after the operation + cancel_token_src.cancel(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + throw; + } + + CHECK_EQUAL("", ex_msg); + } + + { + // cancel the cancellation prior to the operation + auto cancel_token_src = pplx::cancellation_token_source(); + cancel_token_src.cancel(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + task_result.get(); + } + catch (std::exception& e) + { + ex_msg = std::string(e.what()); + } + + CHECK_EQUAL(OPERATION_CANCELED, ex_msg); + } + + { + // cancel the cancellation during the operation + auto cancel_token_src = pplx::cancellation_token_source(); + + std::string ex_msg; + + try + { + auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token()); + std::this_thread::sleep_for(std::chrono::milliseconds(3)); // sleep for some time before canceling the request and checking the result.
+                cancel_token_src.cancel();
+                task_result.get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+        }
+
+        {
+            auto cancel_token_src = pplx::cancellation_token_source();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token());
+                task_result.get();
+                // cancel the cancellation after the operation
+                cancel_token_src.cancel();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("", ex_msg);
+        }
+
+    }
+
+    TEST_FIXTURE(page_blob_test_base, page_blob_create_delete_timeout)
+    {
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(1));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.create_async(1024, 0, azure::storage::access_condition(), options, m_context);
+                task_result.get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+        }
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(3));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.create_async(1024, 0, azure::storage::access_condition(), options, m_context);
+                task_result.get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+        }
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(10000));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.create_async(1024, 0, azure::storage::access_condition(), options, m_context);
+                task_result.get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("", ex_msg);
+        }
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(1));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), options, m_context);
+                task_result.get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+        }
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(3));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), options, m_context);
+                task_result.get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+        }
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(10000));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.delete_blob_if_exists_async(azure::storage::delete_snapshots_option::include_snapshots, azure::storage::access_condition(), options, m_context);
+                task_result.get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("", ex_msg);
+        }
+    }
+
+    TEST_FIXTURE(page_blob_test_base, page_blob_create_cancellation_timeout)
+    {
+        {
+            //when cancellation first
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(100));
+            auto cancel_token_src = pplx::cancellation_token_source();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.create_async(1024, azure::storage::premium_blob_tier::unknown, 0, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token());
+                cancel_token_src.cancel();
+                task_result.get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+        }
+
+        {
+            //when timeout first
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(10));
+            auto cancel_token_src = pplx::cancellation_token_source();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.create_async(1024, azure::storage::premium_blob_tier::unknown, 0, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token());
+                std::this_thread::sleep_for(std::chrono::milliseconds(30)); //sleep for sometime before canceling the request and see result.
+                cancel_token_src.cancel();
+                task_result.get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+        }
+    }
+
+    TEST_FIXTURE(page_blob_test_base, page_blob_open_read_write_cancellation)
+    {
+        std::vector<uint8_t> buffer;
+        buffer.resize(4 * 1024 * 1024);
+        fill_buffer(buffer);
+        m_blob.create(buffer.size());
+
+        {
+            // cancel the cancellation prior to the operation
+            auto cancel_token_src = pplx::cancellation_token_source();
+            cancel_token_src.cancel();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_write_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token());
+                auto os = task_result.get();
+                os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait();
+                os.close().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+        }
+
+        {
+            // cancel the cancellation prior to the operation and write to a canceled ostream.
+            auto cancel_token_src = pplx::cancellation_token_source();
+            cancel_token_src.cancel();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_write_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token());
+                auto os = task_result.get();
+                os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait();
+                os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait();
+                os.close().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+        }
+
+        {
+            // cancel the cancellation during the operation
+            auto cancel_token_src = pplx::cancellation_token_source();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_write_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token());
+                auto os = task_result.get();
+                os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait();
+                std::this_thread::sleep_for(std::chrono::milliseconds(10)); //sleep for sometime before canceling the request and see result.
+                cancel_token_src.cancel();
+                os.close().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+        }
+
+        {
+            auto cancel_token_src = pplx::cancellation_token_source();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_write_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token());
+                auto os = task_result.get();
+                os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait();
+                os.close().get();
+                // cancel the cancellation after the operation
+                cancel_token_src.cancel();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("", ex_msg);
+        }
+
+        m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer));
+
+        {
+            auto cancel_token_src = pplx::cancellation_token_source();
+            // cancel the cancellation prior to the operation
+            cancel_token_src.cancel();
+
+            std::string ex_msg;
+
+
+            try
+            {
+                auto task_result = m_blob.open_read_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token());
+                auto is = task_result.get();
+                is.read().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+        }
+
+        {
+            // cancel the cancellation during the operation
+            auto cancel_token_src = pplx::cancellation_token_source();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_read_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token());
+                std::this_thread::sleep_for(std::chrono::milliseconds(10)); //sleep for sometime before canceling the request and see result.
+                cancel_token_src.cancel();
+                auto is = task_result.get();
+                is.read().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+        }
+
+        {
+            auto cancel_token_src = pplx::cancellation_token_source();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_read_async(azure::storage::access_condition(), azure::storage::blob_request_options(), m_context, cancel_token_src.get_token());
+                std::this_thread::sleep_for(std::chrono::milliseconds(10)); //sleep for sometime before canceling the request and see result.
+                auto is = task_result.get();
+                is.read().get();
+                // cancel the cancellation after the operation
+                cancel_token_src.cancel();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("", ex_msg);
+        }
+
+    }
+
+    TEST_FIXTURE(page_blob_test_base, page_blob_open_read_write_timeout)
+    {
+        std::vector<uint8_t> buffer;
+        buffer.resize(4 * 1024 * 1024);
+        fill_buffer(buffer);
+        m_blob.create(buffer.size());
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(1));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_write_async(azure::storage::access_condition(), options, m_context);
+                auto os = task_result.get();
+                os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait();
+                os.close().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+        }
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(20));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_write_async(azure::storage::access_condition(), options, m_context);
+                auto os = task_result.get();
+                os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait();
+                os.close().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+        }
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::seconds(30));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_write_async(azure::storage::access_condition(), options, m_context);
+                auto os = task_result.get();
+                os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait();
+                os.close().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("", ex_msg);
+        }
+
+        m_blob.upload_from_stream(concurrency::streams::bytestream::open_istream(buffer));
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(1));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_read_async(azure::storage::access_condition(), options, m_context);
+                auto is = task_result.get();
+                is.read().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+        }
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(20));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_read_async(azure::storage::access_condition(), options, m_context);
+                auto is = task_result.get();
+                is.read().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+        }
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::seconds(30));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_read_async(azure::storage::access_condition(), options, m_context);
+                auto is = task_result.get();
+                is.read().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("", ex_msg);
+        }
+    }
+
+    TEST_FIXTURE(page_blob_test_base, page_blob_open_read_write_cancellation_timeout)
+    {
+        std::vector<uint8_t> buffer;
+        buffer.resize(4 * 1024 * 1024);
+        fill_buffer(buffer);
+        m_blob.create(1024);
+        {
+            //when cancellation first
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(100));
+            auto cancel_token_src = pplx::cancellation_token_source();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_write_async(azure::storage::access_condition(), options, m_context, cancel_token_src.get_token());
+                auto os = task_result.get();
+                cancel_token_src.cancel();
+                os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait();
+                os.close().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+        }
+
+        {
+            //when timeout first
+            auto options = azure::storage::blob_request_options();
+            options.set_maximum_execution_time(std::chrono::milliseconds(10));
+            auto cancel_token_src = pplx::cancellation_token_source();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.open_write_async(azure::storage::access_condition(), options, m_context, cancel_token_src.get_token());
+                auto os = task_result.get();
+                os.streambuf().putn_nocopy(buffer.data(), buffer.size()).wait();
+                std::this_thread::sleep_for(std::chrono::milliseconds(30)); //sleep for sometime before canceling the request and see result.
+                cancel_token_src.cancel();
+                os.close().get();
+            }
+            catch (std::exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+        }
+    }
+
+    TEST_FIXTURE(page_blob_test_base, page_blob_concurrent_upload_cancellation_timeout)
+    {
+        utility::size64_t length = 260 * 1024 * 1024;
+        std::vector<uint8_t> buffer;
+        buffer.resize(length);
+        fill_buffer(buffer);
+
+        {
+            auto cancel_token_src = pplx::cancellation_token_source();
+            auto options = azure::storage::blob_request_options();
+            options.set_parallelism_factor(4);
+            options.set_maximum_execution_time(std::chrono::milliseconds(1000));
+            // cancel the cancellation prior to the operation
+            cancel_token_src.cancel();
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.upload_from_stream_async(concurrency::streams::bytestream::open_istream(buffer), length, 0, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token());
+                task_result.get();
+            }
+            catch (azure::storage::storage_exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+        }
+
+        {
+            auto cancel_token_src = pplx::cancellation_token_source();
+            auto options = azure::storage::blob_request_options();
+            options.set_parallelism_factor(4);
+            options.set_maximum_execution_time(std::chrono::milliseconds(1000));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.upload_from_stream_async(concurrency::streams::bytestream::open_istream(buffer), length, 0, azure::storage::access_condition(), options, m_context, cancel_token_src.get_token());
+                std::this_thread::sleep_for(std::chrono::milliseconds(300)); //sleep for sometime before canceling the request and see result.
+                cancel_token_src.cancel();
+                task_result.get();
+            }
+            catch (azure::storage::storage_exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL(OPERATION_CANCELED, ex_msg);
+        }
+
+        {
+            auto options = azure::storage::blob_request_options();
+            options.set_parallelism_factor(4);
+            options.set_maximum_execution_time(std::chrono::milliseconds(1000));
+
+            std::string ex_msg;
+
+            try
+            {
+                auto task_result = m_blob.upload_from_stream_async(concurrency::streams::bytestream::open_istream(buffer), length, 0, azure::storage::access_condition(), options, m_context);
+                task_result.get();
+            }
+            catch (azure::storage::storage_exception& e)
+            {
+                ex_msg = std::string(e.what());
+            }
+
+            CHECK_EQUAL("The client could not finish the operation within specified timeout.", ex_msg);
+        }
+    }
+
+    TEST_FIXTURE(premium_page_blob_test_base, page_blob_cpkv)
+    {
+        utility::size64_t length = 128 * 1024;
+        std::vector<uint8_t> buffer(length);
+        fill_buffer(buffer);
+        auto empty_options = azure::storage::blob_request_options();
+        auto cpk_options = azure::storage::blob_request_options();
+        std::vector<uint8_t> key(32);
+        fill_buffer(key);
+        cpk_options.set_encryption_key(key);
+
+        m_blob.create(length, 0, azure::storage::access_condition(), cpk_options, m_context);
+
+        CHECK_THROW(m_blob.exists(empty_options, m_context), azure::storage::storage_exception);
+        m_blob.exists(cpk_options, m_context);
+        CHECK_THROW(m_blob.upload_pages(concurrency::streams::bytestream::open_istream(buffer), 0, azure::storage::checksum_none, azure::storage::access_condition(), empty_options, m_context), azure::storage::storage_exception);
+        m_blob.upload_pages(concurrency::streams::bytestream::open_istream(buffer), 0, azure::storage::checksum_none, azure::storage::access_condition(), cpk_options, m_context);
+        CHECK_THROW(m_blob.set_premium_blob_tier(azure::storage::premium_blob_tier::p4, azure::storage::access_condition(), empty_options, m_context), azure::storage::storage_exception);
+        CHECK_THROW(m_blob.set_premium_blob_tier(azure::storage::premium_blob_tier::p4, azure::storage::access_condition(), cpk_options, m_context), azure::storage::storage_exception);
+        m_blob.resize(length * 2, azure::storage::access_condition(), empty_options, m_context);
+        m_blob.download_page_ranges(azure::storage::access_condition(), empty_options, m_context);
+        m_blob.set_sequence_number(azure::storage::sequence_number::increment(), azure::storage::access_condition(), empty_options, m_context);
+    }
 }
diff --git a/Microsoft.WindowsAzure.Storage/tests/cloud_queue_test.cpp b/Microsoft.WindowsAzure.Storage/tests/cloud_queue_test.cpp
index 69896e68..808c2e86 100644
--- a/Microsoft.WindowsAzure.Storage/tests/cloud_queue_test.cpp
+++ b/Microsoft.WindowsAzure.Storage/tests/cloud_queue_test.cpp
@@ -110,7 +110,7 @@ SUITE(Queue)
         CHECK(message.pop_receipt().empty());
         CHECK(!message.expiration_time().is_initialized());
         CHECK(!message.insertion_time().is_initialized());
-        CHECK(!message.next_visibile_time().is_initialized());
+        CHECK(!message.next_visible_time().is_initialized());
         CHECK_EQUAL(0, message.dequeue_count());
     }
 
@@ -126,7 +126,7 @@ SUITE(Queue)
         CHECK(message.pop_receipt().empty());
         CHECK(!message.expiration_time().is_initialized());
         CHECK(!message.insertion_time().is_initialized());
-        CHECK(!message.next_visibile_time().is_initialized());
+        CHECK(!message.next_visible_time().is_initialized());
         CHECK_EQUAL(0, message.dequeue_count());
 
         content = get_random_string();
@@ -138,7 +138,7 @@ SUITE(Queue)
         CHECK(message.pop_receipt().empty());
         CHECK(!message.expiration_time().is_initialized());
         CHECK(!message.insertion_time().is_initialized());
-        CHECK(!message.next_visibile_time().is_initialized());
+        CHECK(!message.next_visible_time().is_initialized());
         CHECK_EQUAL(0, message.dequeue_count());
     }
 
@@ -154,7 +154,7 @@ SUITE(Queue)
         CHECK(message.pop_receipt().empty());
         CHECK(!message.expiration_time().is_initialized());
         CHECK(!message.insertion_time().is_initialized());
-        CHECK(!message.next_visibile_time().is_initialized());
+        CHECK(!message.next_visible_time().is_initialized());
         CHECK_EQUAL(0, message.dequeue_count());
 
         content = get_random_binary_data();
@@ -166,7 +166,7 @@ SUITE(Queue)
         CHECK(message.pop_receipt().empty());
         CHECK(!message.expiration_time().is_initialized());
         CHECK(!message.insertion_time().is_initialized());
-        CHECK(!message.next_visibile_time().is_initialized());
+        CHECK(!message.next_visible_time().is_initialized());
         CHECK_EQUAL(0, message.dequeue_count());
     }
 
@@ -183,7 +183,7 @@ SUITE(Queue)
         CHECK(pop_receipt.compare(message.pop_receipt()) == 0);
         CHECK(!message.expiration_time().is_initialized());
         CHECK(!message.insertion_time().is_initialized());
-        CHECK(!message.next_visibile_time().is_initialized());
+        CHECK(!message.next_visible_time().is_initialized());
         CHECK_EQUAL(0, message.dequeue_count());
     }
 
@@ -777,6 +777,123 @@ SUITE(Queue)
         CHECK(!context.request_results()[0].extended_error().message().empty());
     }
 
+    TEST_FIXTURE(queue_service_test_base, Queue_UpdateAndDelQueuedMessage)
+    {
+        azure::storage::cloud_queue queue = get_queue();
+        utility::string_t content = get_random_string();
+
+        {
+            azure::storage::queue_request_options options;
+            azure::storage::cloud_queue_message message;
+            azure::storage::operation_context context;
+            print_client_request_id(context, _XPLATSTR(""));
+
+            message.set_content(content);
+
+            queue.add_message(message, std::chrono::seconds(15 * 60), std::chrono::seconds(0), options, context);
+
+            CHECK(!message.id().empty());
+            CHECK(!message.pop_receipt().empty());
+            CHECK(message.insertion_time().is_initialized());
+            CHECK(message.expiration_time().is_initialized());
+            CHECK(message.next_visible_time().is_initialized());
+
+            utility::string_t old_pop_recepit = message.pop_receipt();
+            utility::datetime old_next_visible_time = message.next_visible_time();
+            message.set_content(get_random_string());
+            queue.update_message(message, std::chrono::seconds(15 * 60), true, options, context);
+
+            CHECK(old_pop_recepit.compare(message.pop_receipt()) != 0);
+            CHECK(old_next_visible_time != message.next_visible_time());
+
+            CHECK(!context.client_request_id().empty());
+            CHECK(context.start_time().is_initialized());
+            CHECK(context.end_time().is_initialized());
+            CHECK_EQUAL(2U, context.request_results().size());
+            CHECK(context.request_results()[1].is_response_available());
+            CHECK(context.request_results()[1].start_time().is_initialized());
+            CHECK(context.request_results()[1].end_time().is_initialized());
+            CHECK(context.request_results()[1].target_location() != azure::storage::storage_location::unspecified);
+            CHECK_EQUAL(web::http::status_codes::NoContent, context.request_results()[1].http_status_code());
+            CHECK(!context.request_results()[1].service_request_id().empty());
+            CHECK(context.request_results()[1].request_date().is_initialized());
+            CHECK(context.request_results()[1].content_md5().empty());
+            CHECK(context.request_results()[1].etag().empty());
+            CHECK(context.request_results()[1].extended_error().code().empty());
+            CHECK(context.request_results()[1].extended_error().message().empty());
+            CHECK(context.request_results()[1].extended_error().details().empty());
+        }
+
+        {
+            azure::storage::queue_request_options options;
+            azure::storage::cloud_queue_message message;
+            azure::storage::operation_context context;
+            print_client_request_id(context, _XPLATSTR(""));
+
+            message.set_content(content);
+
+            queue.add_message(message, std::chrono::seconds(15 * 60), std::chrono::seconds(0), options, context);
+            queue.delete_message(message, options, context);
+
+            CHECK(!context.client_request_id().empty());
+            CHECK(context.start_time().is_initialized());
+            CHECK(context.end_time().is_initialized());
+            CHECK_EQUAL(2U, context.request_results().size());
+            CHECK(context.request_results()[1].is_response_available());
+            CHECK(context.request_results()[1].start_time().is_initialized());
+            CHECK(context.request_results()[1].end_time().is_initialized());
+            CHECK(context.request_results()[1].target_location() != azure::storage::storage_location::unspecified);
+            CHECK_EQUAL(web::http::status_codes::NoContent, context.request_results()[1].http_status_code());
+            CHECK(!context.request_results()[1].service_request_id().empty());
+            CHECK(context.request_results()[1].request_date().is_initialized());
+            CHECK(context.request_results()[1].content_md5().empty());
+            CHECK(context.request_results()[1].etag().empty());
+            CHECK(context.request_results()[1].extended_error().code().empty());
+            CHECK(context.request_results()[1].extended_error().message().empty());
+            CHECK(context.request_results()[1].extended_error().details().empty());
+        }
+
+        queue.delete_queue();
+    }
+
+    TEST_FIXTURE(queue_service_test_base, Queue_Special_Message_TTL)
+    {
+        azure::storage::cloud_queue queue = get_queue();
+        {
+            utility::string_t content = get_random_string();
+            azure::storage::cloud_queue_message message;
+            std::chrono::seconds time_to_live;
+            std::chrono::seconds initial_visibility_timeout;
+            azure::storage::queue_request_options options;
+            azure::storage::operation_context context;
+            print_client_request_id(context, _XPLATSTR(""));
+            message.set_content(content);
+            time_to_live = std::chrono::seconds(-1);
+            initial_visibility_timeout = std::chrono::seconds(0);
+            queue.add_message(message, time_to_live, initial_visibility_timeout, options, context);
+            std::vector<azure::storage::cloud_queue_message> messages = queue.get_messages(1U, initial_visibility_timeout, options, context);
+            CHECK(content == messages[0].content_as_string());
+            CHECK(messages[0].expiration_time().is_initialized());
+        }
+
+        {
+            utility::string_t content = get_random_string();
+            azure::storage::cloud_queue_message message;
+            std::chrono::seconds time_to_live;
+            std::chrono::seconds initial_visibility_timeout;
+            azure::storage::queue_request_options options;
+            azure::storage::operation_context context;
+            print_client_request_id(context, _XPLATSTR(""));
+            message.set_content(content);
+            time_to_live = std::chrono::seconds(7 * 24 * 60 * 60 + 1);
+            initial_visibility_timeout = std::chrono::seconds(0);
+            queue.add_message(message, time_to_live, initial_visibility_timeout, options, context);
+            std::vector<azure::storage::cloud_queue_message> messages = queue.get_messages(1U, initial_visibility_timeout, options, context);
+            CHECK(content == messages[0].content_as_string());
+            CHECK(messages[0].expiration_time().is_initialized());
+        }
+    }
+
     TEST_FIXTURE(queue_service_test_base, Queue_Messages)
     {
         azure::storage::cloud_queue queue = get_queue();
@@ -893,7 +1010,7 @@ SUITE(Queue)
             CHECK(!message.pop_receipt().empty());
             CHECK(message.insertion_time().is_initialized());
            CHECK(message.expiration_time().is_initialized());
-            CHECK(message.next_visibile_time().is_initialized());
+            CHECK(message.next_visible_time().is_initialized());
         }
 
         CHECK(!context.client_request_id().empty());
@@ -918,7 +1035,7 @@ SUITE(Queue)
         {
             utility::string_t old_pop_recepit = message1.pop_receipt();
-            utility::datetime old_next_visible_time = message1.next_visibile_time();
+            utility::datetime old_next_visible_time = message1.next_visible_time();
 
             std::chrono::seconds visibility_timeout;
             bool update_content;
@@ -936,10 +1053,10 @@ SUITE(Queue)
            CHECK(!message1.pop_receipt().empty());
            CHECK(message1.insertion_time().is_initialized());
            CHECK(message1.expiration_time().is_initialized());
-            CHECK(message1.next_visibile_time().is_initialized());
+            CHECK(message1.next_visible_time().is_initialized());
 
            CHECK(old_pop_recepit.compare(message1.pop_receipt()) != 0);
-            CHECK(old_next_visible_time != message1.next_visibile_time());
+            CHECK(old_next_visible_time != message1.next_visible_time());
 
            CHECK(!context.client_request_id().empty());
            CHECK(context.start_time().is_initialized());
@@ -980,7 +1097,7 @@ SUITE(Queue)
            CHECK(message.pop_receipt().empty());
            CHECK(message.insertion_time().is_initialized());
            CHECK(message.expiration_time().is_initialized());
-            CHECK(!message.next_visibile_time().is_initialized());
+            CHECK(!message.next_visible_time().is_initialized());
        }
 
        CHECK(!context.client_request_id().empty());
@@ -1013,7 +1130,7 @@ SUITE(Queue)
        CHECK(!message2.pop_receipt().empty());
        CHECK(message2.insertion_time().is_initialized());
        CHECK(message2.expiration_time().is_initialized());
-        CHECK(message2.next_visibile_time().is_initialized());
+        CHECK(message2.next_visible_time().is_initialized());
 
        CHECK(!context.client_request_id().empty());
        CHECK(context.start_time().is_initialized());
@@ -1045,7 +1162,7 @@ SUITE(Queue)
        CHECK(message.pop_receipt().empty());
        CHECK(message.insertion_time().is_initialized());
        CHECK(message.expiration_time().is_initialized());
-        CHECK(!message.next_visibile_time().is_initialized());
+        CHECK(!message.next_visible_time().is_initialized());
 
        CHECK(!context.client_request_id().empty());
        CHECK(context.start_time().is_initialized());
@@ -1080,7 +1197,7 @@ SUITE(Queue)
        CHECK(!message3.pop_receipt().empty());
        CHECK(message3.insertion_time().is_initialized());
        CHECK(message3.expiration_time().is_initialized());
-        CHECK(message3.next_visibile_time().is_initialized());
+        CHECK(message3.next_visible_time().is_initialized());
 
        CHECK(!context.client_request_id().empty());
        CHECK(context.start_time().is_initialized());
@@ -1112,7 +1229,7 @@ SUITE(Queue)
        CHECK(message.pop_receipt().empty());
        CHECK(!message.insertion_time().is_initialized());
        CHECK(!message.expiration_time().is_initialized());
-        CHECK(!message.next_visibile_time().is_initialized());
+        CHECK(!message.next_visible_time().is_initialized());
 
        CHECK(!context.client_request_id().empty());
        CHECK(context.start_time().is_initialized());
@@ -1147,7 +1264,7 @@ SUITE(Queue)
        CHECK(message.pop_receipt().empty());
        CHECK(!message.insertion_time().is_initialized());
        CHECK(!message.expiration_time().is_initialized());
-        CHECK(!message.next_visibile_time().is_initialized());
+        CHECK(!message.next_visible_time().is_initialized());
 
        CHECK(!context.client_request_id().empty());
        CHECK(context.start_time().is_initialized());
@@ -1172,7 +1289,7 @@ SUITE(Queue)
        {
            new_content = get_random_string();
            utility::string_t old_pop_recepit = message3.pop_receipt();
-            utility::datetime old_next_visible_time = message3.next_visibile_time();
+            utility::datetime old_next_visible_time = message3.next_visible_time();
 
            std::chrono::seconds visibility_timeout;
            bool update_content;
@@ -1191,10 +1308,10 @@ SUITE(Queue)
            CHECK(!message3.pop_receipt().empty());
            CHECK(message3.insertion_time().is_initialized());
            CHECK(message3.expiration_time().is_initialized());
-            CHECK(message3.next_visibile_time().is_initialized());
+            CHECK(message3.next_visible_time().is_initialized());
 
            CHECK(old_pop_recepit.compare(message3.pop_receipt()) != 0);
-            CHECK(old_next_visible_time != message3.next_visibile_time());
+            CHECK(old_next_visible_time != message3.next_visible_time());
 
            CHECK(!context.client_request_id().empty());
            CHECK(context.start_time().is_initialized());
@@ -1226,7 +1343,7 @@ SUITE(Queue)
        CHECK(message.pop_receipt().empty());
        CHECK(message.insertion_time().is_initialized());
        CHECK(message.expiration_time().is_initialized());
-        CHECK(!message.next_visibile_time().is_initialized());
+        CHECK(!message.next_visible_time().is_initialized());
 
        CHECK(!context.client_request_id().empty());
        CHECK(context.start_time().is_initialized());
@@ -1266,7 +1383,7 @@ SUITE(Queue)
        CHECK(!message3.pop_receipt().empty());
        CHECK(message3.insertion_time().is_initialized());
        CHECK(message3.expiration_time().is_initialized());
-        CHECK(message3.next_visibile_time().is_initialized());
+        CHECK(message3.next_visible_time().is_initialized());
 
        CHECK(!context.client_request_id().empty());
        CHECK(context.start_time().is_initialized());
@@ -1298,7 +1415,7 @@ SUITE(Queue)
        CHECK(message.pop_receipt().empty());
        CHECK(message.insertion_time().is_initialized());
        CHECK(message.expiration_time().is_initialized());
-        CHECK(!message.next_visibile_time().is_initialized());
+        CHECK(!message.next_visible_time().is_initialized());
 
        CHECK(!context.client_request_id().empty());
        CHECK(context.start_time().is_initialized());
@@ -1355,7 +1472,7 @@ SUITE(Queue)
        CHECK(message.pop_receipt().empty());
        CHECK(!message.insertion_time().is_initialized());
        CHECK(!message.expiration_time().is_initialized());
-        CHECK(!message.next_visibile_time().is_initialized());
+        CHECK(!message.next_visible_time().is_initialized());
 
        CHECK(!context.client_request_id().empty());
        CHECK(context.start_time().is_initialized());
@@ -1394,7 +1511,7 @@ SUITE(Queue)
            std::chrono::seconds initial_visibility_timeout;
 
            message.set_content(content);
-            time_to_live = std::chrono::seconds(-1);
+            time_to_live = std::chrono::seconds(-2);
            initial_visibility_timeout = std::chrono::seconds(0);
 
            CHECK_THROW(queue.add_message(message, time_to_live, initial_visibility_timeout, options, context), std::invalid_argument);
@@ -1412,18 +1529,6 @@ SUITE(Queue)
            CHECK_THROW(queue.add_message(message, time_to_live, initial_visibility_timeout, options, context), std::invalid_argument);
        }
 
-        {
-            azure::storage::cloud_queue_message message;
-            std::chrono::seconds time_to_live;
-            std::chrono::seconds initial_visibility_timeout;
-
-            message.set_content(content);
-            time_to_live = std::chrono::seconds(30 * 24 * 60 * 60);
-            initial_visibility_timeout = std::chrono::seconds(0);
-
-            CHECK_THROW(queue.add_message(message, time_to_live, initial_visibility_timeout, options, context), std::invalid_argument);
-        }
-
        {
            azure::storage::cloud_queue_message message;
            std::chrono::seconds time_to_live;
@@ -1764,10 +1869,10 @@ SUITE(Queue)
        CHECK(!message2.pop_receipt().empty());
        CHECK(message2.insertion_time().is_initialized());
        CHECK(message2.expiration_time().is_initialized());
-        CHECK(message2.next_visibile_time().is_initialized());
+        CHECK(message2.next_visible_time().is_initialized());
 
        utility::string_t old_pop_recepit = message2.pop_receipt();
-        utility::datetime old_next_visible_time = message2.next_visibile_time();
+        utility::datetime old_next_visible_time = message2.next_visible_time();
 
        UNREFERENCED_PARAMETER(old_pop_recepit);
        UNREFERENCED_PARAMETER(old_next_visible_time);
diff --git a/Microsoft.WindowsAzure.Storage/tests/cloud_storage_account_test.cpp b/Microsoft.WindowsAzure.Storage/tests/cloud_storage_account_test.cpp
index 8755f1a1..e5e73ab3 100644
--- a/Microsoft.WindowsAzure.Storage/tests/cloud_storage_account_test.cpp
+++ b/Microsoft.WindowsAzure.Storage/tests/cloud_storage_account_test.cpp
@@ -17,6 +17,9 @@
 
 #include "stdafx.h"
+#include
+#include
+
 #include "test_base.h"
 #include "check_macros.h"
 #include "was/storage_account.h"
@@ -39,7 +42,9 @@ void check_credentials_equal(const azure::storage::storage_credentials& a, const
     CHECK_EQUAL(a.is_sas(), b.is_sas());
     CHECK_EQUAL(a.is_shared_key(), b.is_shared_key());
     CHECK_UTF8_EQUAL(a.account_name(), b.account_name());
-    CHECK_UTF8_EQUAL(utility::conversions::to_base64(a.account_key()), utility::conversions::to_base64(b.account_key()));
+    if (a.is_shared_key() && b.is_shared_key()) {
+        CHECK_UTF8_EQUAL(utility::conversions::to_base64(a.account_key()), utility::conversions::to_base64(b.account_key()));
+    }
 }
 
 void check_account_equal(azure::storage::cloud_storage_account& a, azure::storage::cloud_storage_account& b)
@@ -454,6 +459,7 @@ SUITE(Core)
        CHECK_EQUAL(true, creds.is_anonymous());
        CHECK_EQUAL(false, creds.is_sas());
        CHECK_EQUAL(false, creds.is_shared_key());
+        CHECK_EQUAL(false, creds.is_bearer_token());
 
        web::http::uri uri(test_uri);
        CHECK_UTF8_EQUAL(test_uri, creds.transform_uri(uri).to_string());
@@ -468,6 +474,7 @@ SUITE(Core)
        CHECK_EQUAL(false, creds.is_anonymous());
        CHECK_EQUAL(false, creds.is_sas());
        CHECK_EQUAL(true, creds.is_shared_key());
+        CHECK_EQUAL(false, creds.is_bearer_token());
 
        web::http::uri uri(test_uri);
        CHECK_UTF8_EQUAL(test_uri, creds.transform_uri(uri).to_string());
@@ -482,10 +489,11 @@ SUITE(Core)
        CHECK_UTF8_EQUAL(token, creds.sas_token());
        CHECK(creds.account_name().empty());
-
CHECK(creds.account_key().empty()); + CHECK(!creds.is_account_key()); CHECK(!creds.is_anonymous()); CHECK(creds.is_sas()); CHECK(!creds.is_shared_key()); + CHECK(!creds.is_bearer_token()); web::http::uri uri(test_uri); CHECK_UTF8_EQUAL(test_uri + _XPLATSTR("?") + token_with_api_version, creds.transform_uri(uri).to_string()); @@ -496,35 +504,98 @@ SUITE(Core) CHECK_UTF8_EQUAL(token, creds.sas_token()); CHECK(creds.account_name().empty()); - CHECK(creds.account_key().empty()); + CHECK(!creds.is_account_key()); CHECK(!creds.is_anonymous()); CHECK(creds.is_sas()); CHECK(!creds.is_shared_key()); + CHECK(!creds.is_bearer_token()); web::http::uri uri(test_uri); CHECK_UTF8_EQUAL(test_uri + _XPLATSTR("?") + token_with_api_version, creds.transform_uri(uri).to_string()); } } - TEST_FIXTURE(test_base, storage_credentials_empty_key) - { - const utility::string_t defaults_connection_string(_XPLATSTR("DefaultEndpointsProtocol=https;AccountName=") + test_account_name + _XPLATSTR(";AccountKey=")); - - azure::storage::storage_credentials creds(test_account_name, utility::string_t()); - CHECK(creds.sas_token().empty()); - CHECK_UTF8_EQUAL(test_account_name, creds.account_name()); - CHECK_EQUAL(false, creds.is_anonymous()); - CHECK_EQUAL(false, creds.is_sas()); - CHECK_EQUAL(true, creds.is_shared_key()); - CHECK_UTF8_EQUAL(utility::string_t(), utility::conversions::to_base64(creds.account_key())); - - azure::storage::cloud_storage_account account1(creds, true); - CHECK_UTF8_EQUAL(defaults_connection_string, account1.to_string(true)); - check_credentials_equal(creds, account1.credentials()); - - auto account2 = azure::storage::cloud_storage_account::parse(defaults_connection_string); - CHECK_UTF8_EQUAL(defaults_connection_string, account2.to_string(true)); - check_credentials_equal(creds, account2.credentials()); + TEST_FIXTURE(test_base, storage_credentials_oauth) + { + utility::string_t token_str = get_random_string(1024); + 
azure::storage::storage_credentials::bearer_token_credential token_cred; + token_cred.m_bearer_token = token_str; + + azure::storage::storage_credentials creds(token_cred); + CHECK(!creds.is_anonymous()); + CHECK(!creds.is_sas()); + CHECK(!creds.is_shared_key()); + CHECK(creds.is_bearer_token()); + CHECK_UTF8_EQUAL(creds.bearer_token(), token_str); + CHECK_UTF8_EQUAL(token_cred.m_bearer_token, token_str); + + azure::storage::storage_credentials creds2(std::move(token_cred)); + CHECK(creds2.is_bearer_token()); + CHECK_UTF8_EQUAL(creds2.bearer_token(), token_str); + CHECK(token_cred.m_bearer_token.empty()); + + azure::storage::storage_credentials creds3(creds); + CHECK(creds.is_bearer_token()); + CHECK(creds3.is_bearer_token()); + CHECK_UTF8_EQUAL(creds.bearer_token(), token_str); + CHECK_UTF8_EQUAL(creds3.bearer_token(), token_str); + azure::storage::storage_credentials creds4(std::move(creds3)); + CHECK(creds4.is_bearer_token()); + CHECK(!creds3.is_bearer_token()); + CHECK_UTF8_EQUAL(creds4.bearer_token(), token_str); + + creds3 = creds4; + CHECK(creds3.is_bearer_token()); + CHECK(creds4.is_bearer_token()); + CHECK_UTF8_EQUAL(creds3.bearer_token(), token_str); + CHECK_UTF8_EQUAL(creds4.bearer_token(), token_str); + + const azure::storage::storage_credentials& creds4cr = creds4; + creds3 = std::move(creds4cr); + CHECK(creds3.is_bearer_token()); + CHECK(creds4.is_bearer_token()); + CHECK_UTF8_EQUAL(creds3.bearer_token(), token_str); + CHECK_UTF8_EQUAL(creds4.bearer_token(), token_str); + + creds3 = std::move(creds4); + CHECK(creds3.is_bearer_token()); + CHECK(!creds4.is_bearer_token()); + CHECK_UTF8_EQUAL(creds3.bearer_token(), token_str); + + const azure::storage::storage_credentials creds6 = creds3; + azure::storage::storage_credentials creds7; + creds7 = creds6; + token_str = get_random_string(512); + creds7.set_bearer_token(token_str); + CHECK(creds3.is_bearer_token()); + CHECK(creds6.is_bearer_token()); + CHECK(creds7.is_bearer_token()); + 
CHECK_UTF8_EQUAL(creds3.bearer_token(), token_str); + CHECK_UTF8_EQUAL(creds6.bearer_token(), token_str); + CHECK_UTF8_EQUAL(creds7.bearer_token(), token_str); + } + + TEST_FIXTURE(test_base, storage_credentials_oauth_operation) + { + using OAuthAccessToken = azure::storage::storage_credentials::bearer_token_credential; + + utility::string_t account_name = test_config::instance().get_oauth_account_name(); + utility::string_t access_token = test_config::instance().get_oauth_token(); + + azure::storage::storage_credentials storage_cred(account_name, OAuthAccessToken{ access_token }); + azure::storage::cloud_storage_account storage_account(storage_cred, /* use https */ true); + + auto blob_client = storage_account.create_cloud_blob_client(); + + auto container_name = test_base::get_random_string(); + auto blob_name = test_base::get_random_string(); + + auto blob_container = blob_client.get_container_reference(container_name); + blob_container.create(); + auto blob = blob_container.get_block_blob_reference(blob_name); + blob.upload_text(_XPLATSTR("Block blob content")); + blob.delete_blob(); + blob_container.delete_container(); } TEST_FIXTURE(test_base, storage_credentials_move_constructor) @@ -537,6 +608,7 @@ SUITE(Core) CHECK_EQUAL(false, creds2.is_anonymous()); CHECK_EQUAL(false, creds2.is_sas()); CHECK_EQUAL(true, creds2.is_shared_key()); + CHECK_EQUAL(false, creds2.is_bearer_token()); } TEST_FIXTURE(test_base, cloud_storage_account_devstore) @@ -841,23 +913,44 @@ SUITE(Core) TEST_FIXTURE(test_base, account_sas_permission) { - auto account = test_config::instance().account(); + int parallelism = 8; + auto check_account_permission = [](int i) { + auto account = test_config::instance().account(); - azure::storage::account_shared_access_policy policy; - policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_minutes(90)); - policy.set_address_or_range(azure::storage::shared_access_policy::ip_address_or_range(_XPLATSTR("0.0.0.0"), 
_XPLATSTR("255.255.255.255"))); - policy.set_protocol(azure::storage::account_shared_access_policy::protocols::https_or_http); - policy.set_service_type((azure::storage::account_shared_access_policy::service_types)0xF); - policy.set_resource_type((azure::storage::account_shared_access_policy::resource_types)0x7); + azure::storage::account_shared_access_policy policy; + policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_minutes(90)); + policy.set_address_or_range(azure::storage::shared_access_policy::ip_address_or_range(_XPLATSTR("0.0.0.0"), _XPLATSTR("255.255.255.255"))); + policy.set_protocol(azure::storage::account_shared_access_policy::protocols::https_or_http); + policy.set_service_type((azure::storage::account_shared_access_policy::service_types)0xF); + policy.set_resource_type((azure::storage::account_shared_access_policy::resource_types)0x7); - for (int i = 1; i < 0x100; i++) - { policy.set_permissions((uint8_t)i); check_account_sas_permission_blob(account, policy); check_account_sas_permission_queue(account, policy); check_account_sas_permission_table(account, policy); check_account_sas_permission_file(account, policy); + }; + + std::vector<std::future<void>> results; + auto wait_on_results = [&results]() + { + for (const auto& r : results) + { + r.wait(); + } + results.clear(); + }; + + for (int i = 1; i < 0x100; ++i) + { + results.emplace_back(std::async(std::launch::async, check_account_permission, i)); + + if (results.size() >= parallelism) + { + wait_on_results(); + } } + wait_on_results(); } TEST_FIXTURE(test_base, account_sas_service_types) @@ -927,9 +1020,11 @@ SUITE(Core) { auto account = test_config::instance().account(); auto blob_host = account.blob_endpoint().primary_uri().host(); + auto blob_port = account.blob_endpoint().primary_uri().port(); web::uri_builder blob_endpoint; blob_endpoint.set_scheme(_XPLATSTR("http")); blob_endpoint.set_host(blob_host); + blob_endpoint.set_port(blob_port); azure::storage::account_shared_access_policy
policy; policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_seconds(120)); @@ -983,8 +1078,79 @@ SUITE(Core) auto error_details = op.request_results().back().extended_error().details(); auto source_ip = error_details[_XPLATSTR("SourceIP")]; - policy.set_address_or_range(azure::storage::shared_access_policy::ip_address_or_range(source_ip)); - sas_token = account.get_shared_access_signature(policy); - azure::storage::cloud_blob_client(account.blob_endpoint(), azure::storage::storage_credentials(sas_token)).list_containers(_XPLATSTR("prefix")); + if (!source_ip.empty()) + { + policy.set_address_or_range(azure::storage::shared_access_policy::ip_address_or_range(source_ip)); + sas_token = account.get_shared_access_signature(policy); + azure::storage::cloud_blob_client(account.blob_endpoint(), azure::storage::storage_credentials(sas_token)).list_containers(_XPLATSTR("prefix")); + } + } + + TEST_FIXTURE(test_base, user_delegation_sas) + { + using OAuthAccessToken = azure::storage::storage_credentials::bearer_token_credential; + using SASToken = azure::storage::storage_credentials::sas_credential; + + utility::string_t account_name = test_config::instance().get_oauth_account_name(); + utility::string_t access_token = test_config::instance().get_oauth_token(); + + azure::storage::storage_credentials storage_cred(account_name, OAuthAccessToken{ access_token }); + azure::storage::cloud_storage_account storage_account(storage_cred, true); + + auto container_name = test_base::get_random_string(); + auto blob_name = test_base::get_random_string(); + auto blob_content1 = test_base::get_random_string(); + auto blob_content2 = test_base::get_random_string(); + + auto blob_client = storage_account.create_cloud_blob_client(); + auto container = blob_client.get_container_reference(container_name); + container.create(); + auto blob = container.get_block_blob_reference(blob_name); + blob.upload_text(blob_content1); + auto blob_snapshot = blob.create_snapshot(); + 
blob.upload_text(blob_content2); + + utility::datetime start = utility::datetime::utc_now() - utility::datetime::from_days(1); + utility::datetime end = utility::datetime::utc_now() + utility::datetime::from_days(1); + auto key = blob_client.get_user_delegation_key(start, end); + + // container sas + azure::storage::blob_shared_access_policy access_policy(start, end, azure::storage::blob_shared_access_policy::write | azure::storage::blob_shared_access_policy::create); + auto sas_token = container.get_user_delegation_sas(key, access_policy); + { + azure::storage::storage_credentials sas_cred(account_name, SASToken{ sas_token }); + azure::storage::cloud_storage_account sas_account(sas_cred, true); + + auto sas_blob_client = sas_account.create_cloud_blob_client(); + auto sas_container = sas_blob_client.get_container_reference(container_name); + auto sas_blob = sas_container.get_block_blob_reference(test_base::get_random_string()); + sas_blob.upload_text(_XPLATSTR("Block blob content")); + CHECK_THROW(sas_blob.delete_blob(), azure::storage::storage_exception); + } + + // blob sas + access_policy = azure::storage::blob_shared_access_policy(utility::datetime(), end, azure::storage::blob_shared_access_policy::read); + azure::storage::cloud_blob_shared_access_headers headers; + headers.set_content_type(_XPLATSTR("text/plain; charset=utf-8")); + sas_token = blob.get_user_delegation_sas(key, access_policy, headers); + { + azure::storage::storage_credentials sas_cred(account_name, SASToken{ sas_token }); + azure::storage::cloud_block_blob sas_blob(blob.uri(), sas_cred); + + CHECK_UTF8_EQUAL(blob_content2, sas_blob.download_text()); + CHECK_THROW(sas_blob.delete_blob(), azure::storage::storage_exception); + } + + // blob snapshot sas + sas_token = blob_snapshot.get_user_delegation_sas(key, access_policy); + { + azure::storage::storage_credentials sas_cred(account_name, SASToken{ sas_token }); + azure::storage::cloud_block_blob sas_blob(blob_snapshot.uri(), 
blob_snapshot.snapshot_time(), sas_cred); + + CHECK_UTF8_EQUAL(blob_content1, sas_blob.download_text()); + CHECK_THROW(sas_blob.delete_blob(), azure::storage::storage_exception); + } + + container.delete_container(); } } diff --git a/Microsoft.WindowsAzure.Storage/tests/cloud_table_test.cpp b/Microsoft.WindowsAzure.Storage/tests/cloud_table_test.cpp index 9986b857..4f8b5a25 100644 --- a/Microsoft.WindowsAzure.Storage/tests/cloud_table_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/cloud_table_test.cpp @@ -134,6 +134,11 @@ SUITE(Table) CHECK(property.str().size() > 0); + // Reset property + property = azure::storage::entity_property(); + CHECK(property.property_type() != azure::storage::edm_type::binary); + CHECK(property.is_null()); + value = get_random_binary_data(); property.set_value(value); @@ -2732,9 +2737,11 @@ SUITE(Table) CHECK(output_document.is_object()); CHECK(output_document.as_object().find(_XPLATSTR("DoubleProperty")) != output_document.as_object().cend()); - CHECK_EQUAL(web::json::value::value_type::Number, output_document.as_object().find(_XPLATSTR("DoubleProperty"))->second.type()); - CHECK(output_document.as_object().find(_XPLATSTR("DoubleProperty"))->second.is_double()); - CHECK_EQUAL(double_value, output_document.as_object().find(_XPLATSTR("DoubleProperty"))->second.as_double()); + auto num = output_document.as_object().find(_XPLATSTR("DoubleProperty"))->second; + CHECK_EQUAL(web::json::value::value_type::Number, num.type()); + // Casablanca does not treat whole-valued doubles like 0.0, 1.0 or 2.00 as double values, so if the number is parsed as an integer rather than a double, check that it is a whole number instead + CHECK(num.is_double() || (round(num.as_double()) == num.as_double())); + CHECK_EQUAL(double_value, num.as_double()); } } diff --git a/Microsoft.WindowsAzure.Storage/tests/executor_test.cpp b/Microsoft.WindowsAzure.Storage/tests/executor_test.cpp index 70636229..7c177fbb 100644 --- a/Microsoft.WindowsAzure.Storage/tests/executor_test.cpp +++
b/Microsoft.WindowsAzure.Storage/tests/executor_test.cpp @@ -18,6 +18,7 @@ #include "stdafx.h" #include "blob_test_base.h" #include "check_macros.h" +#include "wascore/util.h" SUITE(Core) { @@ -168,4 +169,74 @@ SUITE(Core) CHECK_EQUAL(true, caught_storage_exception); CHECK_EQUAL(true, caught_http_exception); } + +#ifdef _WIN32 + class delayed_scheduler : public azure::storage::delayed_scheduler_interface + { + public: + virtual void schedule_after(pplx::TaskProc_t function, void* context, long long delayInMs) override + { + std::this_thread::sleep_for(std::chrono::milliseconds(delayInMs)); + function(context); + } + }; + + TEST_FIXTURE(block_blob_test_base, verify_retry_after_delay) + { + azure::storage::set_wastorage_ambient_delayed_scheduler(std::make_shared()); + + const size_t buffer_size = 1024; + std::vector buffer; + buffer.resize(buffer_size); + auto md5 = fill_buffer_and_get_md5(buffer); + auto stream = concurrency::streams::bytestream::open_istream(buffer); + + azure::storage::operation_context context; + static bool throwException = true; + context.set_response_received([](web::http::http_request&, const web::http::http_response&, azure::storage::operation_context context) + { + if (throwException) + { + throwException = false; + throw azure::storage::storage_exception("retry"); + } + }); + + bool failed = false; + try + { + m_blob.upload_block(get_block_id(0), stream, md5, azure::storage::access_condition(), azure::storage::blob_request_options(), context); + } + catch (azure::storage::storage_exception&) + { + failed = true; + } + + azure::storage::set_wastorage_ambient_delayed_scheduler(nullptr); + CHECK_EQUAL(false, failed); + CHECK_EQUAL(false, throwException); + } + +#else + TEST_FIXTURE(test_base, ssl_context_callback) + { + // Test the ssl context is set to the dependency. 
+ auto client = test_config::instance().account().create_cloud_blob_client(); + CHECK_EQUAL(_XPLATSTR("https"), client.base_uri().primary_uri().scheme());// Needs to invoke ssl check for this test. + azure::storage::operation_context context; + context.set_ssl_context_callback([](boost::asio::ssl::context& context)-> void { + throw std::runtime_error("dummy exception"); }); + auto container = client.get_container_reference(_XPLATSTR("this-container-does-not-exist")); + CHECK_THROW(container.exists(azure::storage::blob_request_options(), context), std::runtime_error); + + // Test reusable client can be reused. + web::http::client::http_client_config config; + config.set_ssl_context_callback([](boost::asio::ssl::context& context)-> void { + throw std::runtime_error("dummy exception"); }); + auto first_client = azure::storage::core::http_client_reusable::get_http_client(azure::storage::storage_uri(_XPLATSTR("http://www.nonexistenthost.com/test1")).primary_uri(), config); + auto second_client = azure::storage::core::http_client_reusable::get_http_client(azure::storage::storage_uri(_XPLATSTR("http://www.nonexistenthost.com/test1")).primary_uri(), config); + // check the client is identical. 
+ CHECK_EQUAL(first_client, second_client); + } +#endif } diff --git a/Microsoft.WindowsAzure.Storage/tests/file_test_base.cpp b/Microsoft.WindowsAzure.Storage/tests/file_test_base.cpp index 2c75859c..912d2d36 100644 --- a/Microsoft.WindowsAzure.Storage/tests/file_test_base.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/file_test_base.cpp @@ -19,17 +19,19 @@ #include "file_test_base.h" #include "check_macros.h" +#include "wascore/util.h" + utility::string_t file_service_test_base::get_random_share_name(size_t length) { utility::string_t name; name.resize(length); std::generate_n(name.begin(), length, []() -> utility::char_t { - const utility::char_t possible_chars[] = { _XPLATSTR("abcdefghijklmnopqrstuvwxyz1234567890") }; - return possible_chars[std::rand() % (sizeof(possible_chars) / sizeof(utility::char_t) - 1)]; + const utility::string_t possible_chars = _XPLATSTR("abcdefghijklmnopqrstuvwxyz1234567890"); + return possible_chars[get_random_int32() % possible_chars.length()]; }); - return utility::conversions::print_string(utility::datetime::utc_now().to_interval()) + name; + return azure::storage::core::convert_to_string(utility::datetime::utc_now().to_interval()) + name; } void file_service_test_base::check_equal(const azure::storage::cloud_file_share& source, const azure::storage::cloud_file_share& target) @@ -67,7 +69,7 @@ void file_service_test_base_with_objects_to_delete::create_share(const utility:: { for (size_t i = 0; i < num; ++i) { - auto index = utility::conversions::print_string(i); + auto index = azure::storage::core::convert_to_string(i); auto share = m_client.get_share_reference(prefix + index); m_shares_to_delete.push_back(share); share.metadata()[_XPLATSTR("index")] = index; @@ -210,4 +212,4 @@ void file_share_test_base::check_access(const utility::string_t& sas_token, uint { CHECK_THROW(file.delete_file(azure::storage::file_access_condition(), azure::storage::file_request_options(), m_context), azure::storage::storage_exception); } -} \ No 
newline at end of file +} diff --git a/Microsoft.WindowsAzure.Storage/tests/file_test_base.h b/Microsoft.WindowsAzure.Storage/tests/file_test_base.h index 1aec114b..b8cb3053 100644 --- a/Microsoft.WindowsAzure.Storage/tests/file_test_base.h +++ b/Microsoft.WindowsAzure.Storage/tests/file_test_base.h @@ -26,8 +26,6 @@ #include "was/file.h" #include "was/blob.h" -extern const utility::string_t dummy_md5;//(_XPLATSTR("MDAwMDAwMDA=")); - class file_service_test_base : public test_base { public: diff --git a/Microsoft.WindowsAzure.Storage/tests/main.cpp b/Microsoft.WindowsAzure.Storage/tests/main.cpp index 2e6f4af9..82b6f8ee 100644 --- a/Microsoft.WindowsAzure.Storage/tests/main.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/main.cpp @@ -17,6 +17,10 @@ #include "stdafx.h" +#include +#include +#include + #include "was/blob.h" #ifndef _WIN32 @@ -30,17 +34,128 @@ #endif -int run_tests(const char* suite_name, const char* test_name) +class custom_test_reporter : public UnitTest::TestReporter { - UnitTest::TestReporterStdout reporter; - UnitTest::TestRunner runner(reporter); - return runner.RunTestsIf(UnitTest::Test::GetTestList(), suite_name, [test_name] (UnitTest::Test* test) -> bool +public: + custom_test_reporter() : m_reporter(std::make_shared<UnitTest::TestReporterStdout>()) {} + ~custom_test_reporter() override {} + + void ReportTestStart(const UnitTest::TestDetails& test) override + { + m_reporter->ReportTestStart(test); + } + + void ReportFailure(const UnitTest::TestDetails& test, char const* failure) override { - return (test_name == NULL) || (!strcmp(test_name, test->m_details.testName)); - }, 0); + std::string suite_name = test.suiteName; + std::string test_name = test.testName; + std::string full_name = suite_name + ":" + test_name; + if (m_failed_tests.empty() || m_failed_tests.back() != full_name) + { + m_failed_tests.emplace_back(full_name); + } + m_reporter->ReportFailure(test, failure); + } + + void ReportTestFinish(const UnitTest::TestDetails& test, float secondsElapsed) override + { +
m_reporter->ReportTestFinish(test, secondsElapsed); + } + + void ReportSummary(int totalTestCount, int failedTestCount, int failureCount, float secondsElapsed) override + { + m_reporter->ReportSummary(totalTestCount, failedTestCount, failureCount, secondsElapsed); + } + + const std::vector<std::string>& GetFailedTests() const + { + return m_failed_tests; + } + +private: + // Since UnitTest::TestReporterStdout privatized all methods, we cannot directly inherit from it. Use this as a workaround. + std::shared_ptr<UnitTest::TestReporterStdout> m_reporter; + std::vector<std::string> m_failed_tests; +}; + +struct retry_policy +{ + // If m out of n retries succeed, this test case is considered passed. + int m{ 0 }; + int n{ 0 }; +}; + +int run_tests(const std::unordered_set<std::string>& included_cases, const std::unordered_set<std::string>& excluded_cases, retry_policy retry_policy, const std::string& warning_message_prefix) +{ + std::unordered_map<std::string, int> failed_testcases; + { + custom_test_reporter reporter; + UnitTest::TestRunner runner(reporter); + + auto match = [](const std::unordered_set<std::string>& all_cases, const std::string& suite_name, const std::string& test_name) + { + return all_cases.find(suite_name) != all_cases.end() || all_cases.find(suite_name + ":" + test_name) != all_cases.end(); + }; + + runner.RunTestsIf(UnitTest::Test::GetTestList(), nullptr, [match, included_cases, excluded_cases](const UnitTest::Test* test) -> bool + { + std::string suite_name = test->m_details.suiteName; + std::string test_name = test->m_details.testName; + + return !match(excluded_cases, suite_name, test_name) && (included_cases.empty() || match(included_cases, suite_name, test_name)); + }, 0); + + if (retry_policy.m > 0) + { + for (const auto& t : reporter.GetFailedTests()) + { + failed_testcases.emplace(t, 0); + } + } + } + + for (int i = 0; i < retry_policy.n && !failed_testcases.empty(); ++i) + { + custom_test_reporter reporter; + UnitTest::TestRunner runner(reporter); + + runner.RunTestsIf(UnitTest::Test::GetTestList(), nullptr, [&failed_testcases, i,
retry_policy](const UnitTest::Test* test) -> bool + { + std::string suite_name = test->m_details.suiteName; + std::string test_name = test->m_details.testName; + std::string full_name = suite_name + ":" + test_name; + + auto ite = failed_testcases.find(full_name); + if (ite != failed_testcases.end()) + { + int failed_count = ite->second; + int successful_count = i - failed_count; + return successful_count < retry_policy.m && failed_count < retry_policy.n - retry_policy.m + 1; + } + return false; + }, 0); + + for (const auto& t : reporter.GetFailedTests()) + { + ++failed_testcases[t]; + } + } + + int num_failed = 0; + for (const auto& p : failed_testcases) + { + fprintf(stderr, "%s%s failed %d time(s)\n", warning_message_prefix.data(), p.first.data(), p.second + 1); + + if (p.second > retry_policy.n - retry_policy.m) + { + ++num_failed; + } + } + + return num_failed; } -int main(int argc, const char* argv[]) +int main(int argc, const char** argv) { azure::storage::operation_context::set_default_log_level(azure::storage::client_log_level::log_level_verbose); @@ -58,30 +173,59 @@ int main(int argc, const char* argv[]) #endif - int failure_count; - if (argc == 1) + std::unordered_set<std::string> included_cases; + std::unordered_set<std::string> excluded_cases; + retry_policy retry_policy; + std::string warning_message_prefix; + + auto starts_with = [](const std::string& str, const std::string& prefix) { - failure_count = run_tests(NULL, NULL); - } - else + size_t i = 0; + while (i < str.length() && i < prefix.length() && prefix[i] == str[i]) ++i; + return i == prefix.length(); + }; + + auto add_to = [](std::unordered_set<std::string>& all_cases, const std::string& name) + { + auto colon_pos = name.find(":"); + std::string suite_name = name.substr(0, colon_pos); + if (suite_name.empty()) + { + throw std::invalid_argument("Invalid test case \"" + name + "\"."); + } + all_cases.emplace(name); + }; + + for (int i = 1; i < argc; ++i) { - failure_count = 0; - for (int i = 1; i < argc; ++i) + std::string
arg(argv[i]); + if (starts_with(arg, "--retry-policy=")) { - std::string arg(argv[i]); - auto colon = arg.find(':'); - if (colon == std::string::npos) + size_t pos; + std::string policy_str = arg.substr(arg.find('=') + 1); + retry_policy.m = std::stoi(policy_str, &pos); + if (pos != policy_str.size()) { - failure_count += run_tests(argv[i], NULL); + retry_policy.n = std::stoi(policy_str.substr(pos + 1)); } else { - auto suite_name = arg.substr(0, colon); - auto test_name = arg.substr(colon + 1); - failure_count += run_tests(suite_name.c_str(), test_name.c_str()); + retry_policy.n = retry_policy.m; } } + else if (starts_with(arg, "--warning-message=")) + { + warning_message_prefix = arg.substr(arg.find('=') + 1); + } + else if (starts_with(arg, "--exclude=")) + { + add_to(excluded_cases, arg.substr(arg.find('=') + 1)); + } + else + { + add_to(included_cases, arg); + } } - return failure_count; + return run_tests(included_cases, excluded_cases, retry_policy, warning_message_prefix); } diff --git a/Microsoft.WindowsAzure.Storage/tests/packages.config b/Microsoft.WindowsAzure.Storage/tests/packages.config deleted file mode 100644 index f08160b6..00000000 --- a/Microsoft.WindowsAzure.Storage/tests/packages.config +++ /dev/null @@ -1,5 +0,0 @@ - - - - - \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/tests/read_from_secondary_test.cpp b/Microsoft.WindowsAzure.Storage/tests/read_from_secondary_test.cpp index 5ca686fd..b01a5909 100644 --- a/Microsoft.WindowsAzure.Storage/tests/read_from_secondary_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/read_from_secondary_test.cpp @@ -20,6 +20,8 @@ #include "check_macros.h" #include "blob_test_base.h" +#include "wascore/util.h" + class basic_always_retry_policy : public azure::storage::basic_retry_policy { public: @@ -86,7 +88,7 @@ class multi_location_test_helper }); } - ~multi_location_test_helper() + ~multi_location_test_helper() noexcept(false) { m_context.set_sending_request(std::function()); @@ 
-99,7 +101,8 @@ class multi_location_test_helper // This check assumes that datetime::to_interval() returns the time in microseconds/10 std::chrono::microseconds interval((m_context.request_results()[m_context_results_offset + i + 1].start_time().to_interval() - m_context.request_results()[m_context_results_offset + i].end_time().to_interval()) / 10); - CHECK(m_retry_info_list[i].retry_interval() < interval); + const std::chrono::milliseconds deviation(1); + CHECK(m_retry_info_list[i].retry_interval() - deviation < interval); } } @@ -239,7 +242,7 @@ SUITE(Core) { for (int i = 0; i < 2; i++) { - auto index = utility::conversions::print_string(i); + auto index = azure::storage::core::convert_to_string(i); auto blob = m_container.get_block_blob_reference(_XPLATSTR("blockblob") + index); blob.upload_text(blob.name(), azure::storage::access_condition(), azure::storage::blob_request_options(), m_context); diff --git a/Microsoft.WindowsAzure.Storage/tests/result_iterator_test.cpp b/Microsoft.WindowsAzure.Storage/tests/result_iterator_test.cpp index 5ae749df..251e6693 100644 --- a/Microsoft.WindowsAzure.Storage/tests/result_iterator_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/result_iterator_test.cpp @@ -18,6 +18,7 @@ #include "stdafx.h" #include "check_macros.h" #include "test_base.h" + #include "wascore/util.h" typedef std::function(const azure::storage::continuation_token &, size_t)> result_generator_type; @@ -58,7 +59,7 @@ class test_result_provider if (!m_return_full_segment) { // return less results - res_segment_size = std::rand() % (res_segment_size + 1); + res_segment_size = get_random_int32() % (res_segment_size + 1); } std::vector res_vec; diff --git a/Microsoft.WindowsAzure.Storage/tests/service_properties_test.cpp b/Microsoft.WindowsAzure.Storage/tests/service_properties_test.cpp index 5e0e4871..29887cba 100644 --- a/Microsoft.WindowsAzure.Storage/tests/service_properties_test.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/service_properties_test.cpp 
@@ -166,6 +166,7 @@ void test_service_properties(const Client& client, const Options& options, azure add_cors_rule_2(temp_props.cors()); client.upload_service_properties(props, azure::storage::service_properties_includes::all(), options, context); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); { @@ -173,6 +174,7 @@ void test_service_properties(const Client& client, const Options& options, azure includes.set_logging(true); client.upload_service_properties(temp_props, includes, options, context); add_logging_2(props.logging()); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); add_logging_1(temp_props.logging()); } @@ -182,6 +184,7 @@ void test_service_properties(const Client& client, const Options& options, azure includes.set_hour_metrics(true); client.upload_service_properties(temp_props, includes, options, context); add_metrics_2(props.hour_metrics()); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); add_metrics_1(temp_props.hour_metrics()); } @@ -191,6 +194,7 @@ void test_service_properties(const Client& client, const Options& options, azure includes.set_minute_metrics(true); client.upload_service_properties(temp_props, includes, options, context); add_metrics_1(props.minute_metrics()); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); add_metrics_2(temp_props.minute_metrics()); } @@ -200,6 +204,7 @@ void test_service_properties(const Client& client, const Options& options, azure includes.set_cors(true); client.upload_service_properties(temp_props, includes, options, context); props.cors().erase(props.cors().begin()); + std::this_thread::sleep_for(std::chrono::seconds(3)); 
check_service_properties(props, client.download_service_properties(options, context)); temp_props.cors().clear(); } @@ -213,6 +218,7 @@ void test_service_properties(const Client& client, const Options& options, azure add_metrics_1(props.hour_metrics()); add_metrics_2(props.minute_metrics()); props.cors().clear(); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); } @@ -227,6 +233,7 @@ void test_service_properties(const Client& client, const Options& options, azure add_metrics_3(props.hour_metrics()); add_metrics_4(props.minute_metrics()); props.cors().clear(); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); } @@ -234,6 +241,7 @@ void test_service_properties(const Client& client, const Options& options, azure if (default_version_supported) { client.upload_service_properties(props, azure::storage::service_properties_includes::all(), options, context); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); } else @@ -257,6 +265,7 @@ void test_service_properties(const azure::storage::cloud_file_client& client, co add_cors_rule_2(temp_props.cors()); client.upload_service_properties(props, azure::storage::service_properties_includes::file(), options, context); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); { @@ -264,6 +273,7 @@ void test_service_properties(const azure::storage::cloud_file_client& client, co includes.set_hour_metrics(true); client.upload_service_properties(temp_props, includes, options, context); add_metrics_2(props.hour_metrics()); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); 
add_metrics_1(temp_props.hour_metrics()); } @@ -273,6 +283,7 @@ void test_service_properties(const azure::storage::cloud_file_client& client, co includes.set_minute_metrics(true); client.upload_service_properties(temp_props, includes, options, context); add_metrics_1(props.minute_metrics()); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); add_metrics_2(temp_props.minute_metrics()); } @@ -282,6 +293,7 @@ void test_service_properties(const azure::storage::cloud_file_client& client, co includes.set_cors(true); client.upload_service_properties(temp_props, includes, options, context); props.cors().erase(props.cors().begin()); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); temp_props.cors().clear(); } @@ -295,6 +307,7 @@ void test_service_properties(const azure::storage::cloud_file_client& client, co add_metrics_1(props.hour_metrics()); add_metrics_2(props.minute_metrics()); props.cors().clear(); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); } @@ -309,6 +322,7 @@ void test_service_properties(const azure::storage::cloud_file_client& client, co add_metrics_3(props.hour_metrics()); add_metrics_4(props.minute_metrics()); props.cors().clear(); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); } @@ -316,7 +330,9 @@ void test_service_properties(const azure::storage::cloud_file_client& client, co if (default_version_supported) { client.upload_service_properties(props, azure::storage::service_properties_includes::all(), options, context); + std::this_thread::sleep_for(std::chrono::seconds(3)); check_service_properties(props, client.download_service_properties(options, context)); + std::cout << "upload 
default service properties successfully" << std::endl; } else { diff --git a/Microsoft.WindowsAzure.Storage/tests/test_base.cpp b/Microsoft.WindowsAzure.Storage/tests/test_base.cpp index d8e3e4af..ba01d9a7 100644 --- a/Microsoft.WindowsAzure.Storage/tests/test_base.cpp +++ b/Microsoft.WindowsAzure.Storage/tests/test_base.cpp @@ -17,11 +17,39 @@ #include "stdafx.h" +#include +#include + #include "test_base.h" #include "cpprest/json.h" +static thread_local std::mt19937_64 random_generator(std::random_device{}()); + +bool get_random_boolean() +{ + std::uniform_int_distribution distribution(0, 1); + return distribution(random_generator) == 0; +} + +int32_t get_random_int32() +{ + std::uniform_int_distribution distribution(0, std::numeric_limits::max()); + return distribution(random_generator); +} + +int64_t get_random_int64() +{ + std::uniform_int_distribution distribution(0LL, std::numeric_limits::max()); + return distribution(random_generator); +} + +double get_random_double() +{ + std::uniform_real_distribution distribution(0, 1.0); + return distribution(random_generator); +} + utility::string_t test_base::object_name_prefix = utility::string_t(_XPLATSTR("nativeclientlibraryunittest")); -bool test_base::is_random_initialized = false; test_config::test_config() { @@ -32,6 +60,8 @@ test_config::test_config() config_file >> config; auto target_name = config[_XPLATSTR("target")].as_string(); + auto premium_target_name = config[_XPLATSTR("premium_target")].as_string(); + auto blob_storage_target_name = config[_XPLATSTR("blob_storage_target")].as_string(); web::json::value& tenants = config[_XPLATSTR("tenants")]; for (web::json::array::const_iterator it = tenants.as_array().cbegin(); it != tenants.as_array().cend(); ++it) @@ -52,103 +82,195 @@ test_config::test_config() const web::json::value& connection_string_obj = it->at(_XPLATSTR("connection_string")); m_account = azure::storage::cloud_storage_account::parse(connection_string_obj.as_string()); } - - break; + } + else 
if (name_obj.as_string() == premium_target_name) + { + if (!it->has_field(_XPLATSTR("connection_string"))) + { + azure::storage::storage_credentials credentials(it->at(_XPLATSTR("account_name")).as_string(), it->at(_XPLATSTR("account_key")).as_string()); + azure::storage::storage_uri blob_uri(it->at(_XPLATSTR("blob_primary_endpoint")).as_string(), it->at(_XPLATSTR("blob_secondary_endpoint")).as_string()); + azure::storage::storage_uri queue_uri(it->at(_XPLATSTR("queue_primary_endpoint")).as_string(), it->at(_XPLATSTR("queue_secondary_endpoint")).as_string()); + azure::storage::storage_uri table_uri(it->at(_XPLATSTR("table_primary_endpoint")).as_string(), it->at(_XPLATSTR("table_secondary_endpoint")).as_string()); + m_premium_account = azure::storage::cloud_storage_account(credentials, blob_uri, queue_uri, table_uri); + } + else + { + const web::json::value& connection_string_obj = it->at(_XPLATSTR("connection_string")); + m_premium_account = azure::storage::cloud_storage_account::parse(connection_string_obj.as_string()); + } + } + else if (name_obj.as_string() == blob_storage_target_name) + { + if (!it->has_field(_XPLATSTR("connection_string"))) + { + azure::storage::storage_credentials credentials(it->at(_XPLATSTR("account_name")).as_string(), it->at(_XPLATSTR("account_key")).as_string()); + azure::storage::storage_uri blob_uri(it->at(_XPLATSTR("blob_primary_endpoint")).as_string(), it->at(_XPLATSTR("blob_secondary_endpoint")).as_string()); + azure::storage::storage_uri queue_uri(it->at(_XPLATSTR("queue_primary_endpoint")).as_string(), it->at(_XPLATSTR("queue_secondary_endpoint")).as_string()); + azure::storage::storage_uri table_uri(it->at(_XPLATSTR("table_primary_endpoint")).as_string(), it->at(_XPLATSTR("table_secondary_endpoint")).as_string()); + m_blob_storage_account = azure::storage::cloud_storage_account(credentials, blob_uri, queue_uri, table_uri); + } + else + { + const web::json::value& connection_string_obj = it->at(_XPLATSTR("connection_string")); + 
m_blob_storage_account = azure::storage::cloud_storage_account::parse(connection_string_obj.as_string()); + } } } -} -void test_base::print_client_request_id(const azure::storage::operation_context& context, const utility::string_t& purpose) -{ - std::string suite_name(UnitTest::CurrentTest::Details()->suiteName); - std::string test_name(UnitTest::CurrentTest::Details()->testName); - ucout << utility::conversions::to_string_t(suite_name) << _XPLATSTR(":") << utility::conversions::to_string_t(test_name) << _XPLATSTR(": ") << purpose << _XPLATSTR(" client request ID: ") << context.client_request_id() << std::endl; + web::json::value& token_information = config[_XPLATSTR("token_information")]; + + m_token_account_name = token_information.at(_XPLATSTR("account_name")).as_string(); + m_token_tenant_id = token_information.at(_XPLATSTR("tenant_id")).as_string(); + m_token_client_id = token_information.at(_XPLATSTR("client_id")).as_string(); + m_token_client_secret = token_information.at(_XPLATSTR("client_secret")).as_string(); + m_token_resource = token_information.at(_XPLATSTR("resource")).as_string(); } -utility::string_t test_base::get_string(utility::char_t value1, utility::char_t value2) +const utility::string_t& test_config::get_oauth_account_name() const { - utility::ostringstream_t result; - result << value1 << value2; - return result.str(); + return m_token_account_name; } -void test_base::initialize_random() +utility::string_t test_config::get_oauth_token() const { - if (!is_random_initialized) + utility::string_t uri = _XPLATSTR("https://login.microsoftonline.com/") + m_token_tenant_id + _XPLATSTR("/oauth2/token"); + + utility::string_t body; + body += _XPLATSTR("grant_type=client_credentials&"); + body += _XPLATSTR("client_id=") + m_token_client_id + _XPLATSTR("&"); + body += _XPLATSTR("client_secret=") + m_token_client_secret + _XPLATSTR("&"); + body += _XPLATSTR("resource=") + m_token_resource; + + web::http::http_request request; + 
request.set_method(web::http::methods::POST); + request.set_body(body, _XPLATSTR("application/x-www-form-urlencoded")); + + utility::string_t access_token; + + web::http::client::http_client client(uri); + auto get_token_task = client.request(request).then([](web::http::http_response response) { - srand((unsigned int)time(NULL)); - is_random_initialized = true; + if (response.status_code() == web::http::status_codes::OK) + { + return response.extract_json(); + } + return pplx::task_from_result(web::json::value()); + }).then([&access_token](pplx::task json_task) + { + const auto& json_value = json_task.get(); + access_token = json_value.at(_XPLATSTR("access_token")).as_string(); + }); + + try + { + get_token_task.wait(); + } + catch (std::exception& e) + { + std::cout << "Cannot get OAuth access token: " << e.what() << std::endl; } -} -bool test_base::get_random_boolean() -{ - initialize_random(); - return (rand() & 0x1) == 0; + return access_token; } -int32_t test_base::get_random_int32() +utility::datetime test_base::parse_datetime(const utility::string_t& value, utility::datetime::date_format format) { - initialize_random(); - return (int32_t)rand() << 16 | (int32_t)rand(); + if (!value.empty()) + { + return utility::datetime::from_string(value, format); + } + else + { + return utility::datetime(); + } } -int64_t test_base::get_random_int64() +void test_base::print_client_request_id(const azure::storage::operation_context& context, const utility::string_t& purpose) { - initialize_random(); - return (int64_t)rand() << 48 | (int64_t)rand() << 32 | (int64_t)rand() << 16 | (int64_t)rand(); + std::string suite_name(UnitTest::CurrentTest::Details()->suiteName); + std::string test_name(UnitTest::CurrentTest::Details()->testName); + ucout << utility::conversions::to_string_t(suite_name) << _XPLATSTR(":") << utility::conversions::to_string_t(test_name) << _XPLATSTR(": ") << purpose << _XPLATSTR(" client request ID: ") << context.client_request_id() << std::endl; } -double 
test_base::get_random_double() +utility::string_t test_base::get_string(utility::char_t value1, utility::char_t value2) { - initialize_random(); - return (double)rand() / RAND_MAX; + utility::ostringstream_t result; + result << value1 << value2; + return result.str(); } -utility::string_t test_base::get_random_string(const std::vector charset, size_t size) +utility::string_t test_base::get_random_string(const std::vector& charset, size_t size) { - initialize_random(); utility::string_t result; result.reserve(size); + std::uniform_int_distribution distribution(0, charset.size() - 1); for (size_t i = 0; i < size; ++i) { - result.push_back(charset[rand() % charset.size()]); + result.push_back(charset[distribution(random_generator)]); } - return result; } utility::string_t test_base::get_random_string(size_t size) { - initialize_random(); - utility::string_t result; - result.reserve(size); - for (size_t i = 0; i < size; ++i) - { - result.push_back((utility::char_t) (_XPLATSTR('0') + rand() % 10)); - } - return result; + const static std::vector charset { + _XPLATSTR('0'), _XPLATSTR('1'), _XPLATSTR('2'), _XPLATSTR('3'), _XPLATSTR('4'), + _XPLATSTR('5'), _XPLATSTR('6'), _XPLATSTR('7'), _XPLATSTR('8'), _XPLATSTR('9'), + }; + return get_random_string(charset, size); } utility::datetime test_base::get_random_datetime() { - initialize_random(); - return utility::datetime::utc_now() + rand(); + return utility::datetime::utc_now() + get_random_int32(); } std::vector test_base::get_random_binary_data() { - initialize_random(); const int SIZE = 100; std::vector result; result.reserve(SIZE); + std::uniform_int_distribution distribution(std::numeric_limits::min(), std::numeric_limits::max()); for (int i = 0; i < SIZE; ++i) { - result.push_back((unsigned char)(rand() % 256)); + result.push_back(uint8_t(distribution(random_generator))); } return result; } +void test_base::fill_buffer(std::vector& buffer) +{ + fill_buffer(buffer, 0, buffer.size()); +} + +void 
test_base::fill_buffer(std::vector& buffer, size_t offset, size_t count) +{ + using rand_int_type = uint64_t; + const size_t rand_int_size = sizeof(rand_int_type); + + uint8_t* start_addr = buffer.data() + offset; + uint8_t* end_addr = start_addr + count; + + auto random_char_generator = []() -> uint8_t + { + return uint8_t(get_random_int32()); + }; + + while (uintptr_t(start_addr) % rand_int_size != 0 && start_addr < end_addr) + { + *(start_addr++) = random_char_generator(); + } + + std::uniform_int_distribution distribution(0LL, std::numeric_limits::max()); + while (start_addr + rand_int_size <= end_addr) + { + *reinterpret_cast(start_addr) = distribution(random_generator); + start_addr += rand_int_size; + } + + std::generate(start_addr, end_addr, random_char_generator); +} + utility::uuid test_base::get_random_guid() { return utility::new_uuid(); diff --git a/Microsoft.WindowsAzure.Storage/tests/test_base.h b/Microsoft.WindowsAzure.Storage/tests/test_base.h index db12d8b5..1e33fee1 100644 --- a/Microsoft.WindowsAzure.Storage/tests/test_base.h +++ b/Microsoft.WindowsAzure.Storage/tests/test_base.h @@ -20,6 +20,17 @@ #include "was/common.h" #include "was/storage_account.h" +#if defined(_WIN32) +#define OPERATION_CANCELED "operation canceled" +#else +#define OPERATION_CANCELED "Operation canceled" +#endif + +bool get_random_boolean(); +int32_t get_random_int32(); +int64_t get_random_int64(); +double get_random_double(); + class test_config { public: @@ -35,11 +46,32 @@ class test_config return m_account; } + const azure::storage::cloud_storage_account& premium_account() const + { + return m_premium_account; + } + + const azure::storage::cloud_storage_account& blob_storage_account() const + { + return m_blob_storage_account; + } + + const utility::string_t& get_oauth_account_name() const; + utility::string_t get_oauth_token() const; + private: test_config(); azure::storage::cloud_storage_account m_account; + azure::storage::cloud_storage_account m_premium_account; + 
azure::storage::cloud_storage_account m_blob_storage_account; + + utility::string_t m_token_account_name; + utility::string_t m_token_tenant_id; + utility::string_t m_token_client_id; + utility::string_t m_token_client_secret; + utility::string_t m_token_resource; }; class test_base @@ -63,20 +95,39 @@ class test_base azure::storage::operation_context m_context; static utility::string_t object_name_prefix; - static bool is_random_initialized; public: + static utility::datetime parse_datetime(const utility::string_t& value, utility::datetime::date_format format = utility::datetime::date_format::RFC_1123); static utility::string_t get_string(utility::char_t value1, utility::char_t value2); - static void initialize_random(); - static bool get_random_boolean(); - static int32_t get_random_int32(); - static int64_t get_random_int64(); - static double get_random_double(); - static utility::string_t get_random_string(const std::vector charset, size_t size); + static utility::string_t get_random_string(const std::vector& charset, size_t size); static utility::string_t get_random_string(size_t size = 10); static utility::datetime get_random_datetime(); static std::vector get_random_binary_data(); + static void fill_buffer(std::vector& buffer); + static void fill_buffer(std::vector& buffer, size_t offset, size_t count); static utility::uuid get_random_guid(); static utility::string_t get_object_name(const utility::string_t& object_type_name); + + template + static TEnum get_random_enum(TEnum max_enum_value) + { + return static_cast(get_random_int32() % (static_cast(max_enum_value) + 1)); + } + + template + static std::vector transform_if(It it, std::function func_if, std::function func_tran) + { + std::vector results; + It end_it; + while (it != end_it) + { + if (func_if(*it)) + { + results.push_back(func_tran(*it)); + } + ++it; + } + return results; + } }; diff --git a/Microsoft.WindowsAzure.Storage/tests/test_configurations.json 
b/Microsoft.WindowsAzure.Storage/tests/test_configurations.json index 1e50e39e..a33e2d8a 100644 --- a/Microsoft.WindowsAzure.Storage/tests/test_configurations.json +++ b/Microsoft.WindowsAzure.Storage/tests/test_configurations.json @@ -1,15 +1,34 @@ { - "target": "production", - "tenants": [ - { - "name": "devstore", - "type": "devstore", - "connection_string": "UseDevelopmentStorage=true" - }, - { - "name": "production", - "type": "cloud", - "connection_string": "DefaultEndpointsProtocol=https;AccountName=myaccountname;AccountKey=myaccountkey" - } - ] + "target": "production", + "premium_target": "premium_account", + "blob_storage_target": "blob_storage_account", + "tenants": [ + { + "name": "devstore", + "type": "devstore", + "connection_string": "UseDevelopmentStorage=true" + }, + { + "name": "production", + "type": "cloud", + "connection_string": "DefaultEndpointsProtocol=https;" + }, + { + "name": "premium_account", + "type": "cloud", + "connection_string": "DefaultEndpointsProtocol=https;" + }, + { + "name": "blob_storage_account", + "type": "cloud", + "connection_string": "DefaultEndpointsProtocol=https;" + } + ], + "token_information": { + "account_name": "", + "tenant_id": "", + "client_id": "", + "client_secret": "", + "resource": "https://storage.azure.com" + } } diff --git a/Microsoft.WindowsAzure.Storage/tests/timer_handler_test.cpp b/Microsoft.WindowsAzure.Storage/tests/timer_handler_test.cpp new file mode 100644 index 00000000..f8932faa --- /dev/null +++ b/Microsoft.WindowsAzure.Storage/tests/timer_handler_test.cpp @@ -0,0 +1,74 @@ +// ----------------------------------------------------------------------------------------- +// +// Copyright 2018 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. +// +// ----------------------------------------------------------------------------------------- + +#include "stdafx.h" +#include "check_macros.h" +#include "test_base.h" + +#include "wascore/timer_handler.h" + +SUITE(Core) +{ + TEST_FIXTURE(test_base, cancellation_token_source_multiple_cancel_test) + { + std::string exception_msg; + try + { + pplx::cancellation_token_source source; + for (auto i = 0; i < 100; ++i) + { + source.cancel(); + } + } + catch (std::exception& e) + { + exception_msg = e.what(); + } + + CHECK_EQUAL("", exception_msg); + } + + TEST_FIXTURE(test_base, cancellation_token_source_multiple_cancel_concurrent_test) + { + std::string exception_msg; + try + { + for (auto i = 0; i < 5000; ++i) + { + pplx::cancellation_token_source source; + pplx::task_completion_event tce; + std::vector> tasks; + for (auto k = 0; k < 100; ++k) + { + auto task = pplx::create_task(tce).then([source]() { source.cancel(); }); + tasks.push_back(task); + } + tce.set(); + for (auto k = 0; k < 100; ++k) + { + tasks[k].get(); + } + } + } + catch (std::exception& e) + { + exception_msg = e.what(); + } + + CHECK_EQUAL("", exception_msg); + } +} \ No newline at end of file diff --git a/Microsoft.WindowsAzure.Storage/version.rc b/Microsoft.WindowsAzure.Storage/version.rc index 6d40764b..b416fe4b 100644 Binary files a/Microsoft.WindowsAzure.Storage/version.rc and b/Microsoft.WindowsAzure.Storage/version.rc differ diff --git a/README.md b/README.md index 926454c2..a7370193 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,8 @@ -# Azure Storage Client Library for C++ 
(2.6.0) +# Azure Storage Client Library for C++ (7.5.0) (Deprecated) -The Azure Storage Client Library for C++ allows you to build applications against Microsoft Azure Storage. For an overview of Azure Storage, see [Introduction to Microsoft Azure Storage](http://azure.microsoft.com/en-us/documentation/articles/storage-introduction/). +For more details on the retirement and alternatives to using this project, visit [Retirement notice: The legacy Azure Storage C++ client libraries will be retired on 29 March 2025](https://aka.ms/AzStorageCPPSDKRetirement). + +The Azure Storage Client Library for C++ allows you to build applications against Microsoft Azure Storage. For an overview of Azure Storage, see [Introduction to Microsoft Azure Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction). # Features @@ -20,10 +22,7 @@ The Azure Storage Client Library for C++ allows you to build applications agains # Getting started -For the best development experience, we recommend that developers use the official Microsoft NuGet packages for libraries. NuGet packages are regularly updated with new functionality and hotfixes. -Download the [NuGet Package](http://www.nuget.org/packages/wastorage). - -Azure Storage Client Library for C++ is also avaiable on Vcpkg since v2.5.0. To get know more about Vcpkg, please visit https://github.com/Microsoft/vcpkg. +For the best development experience, we recommend that developers use [vcpkg](https://github.com/Microsoft/vcpkg) as the cross-platform library manager. ## Requirements @@ -45,38 +44,94 @@ For general suggestions about Azure, use our [Azure feedback forum](http://feedb ## Download & Install -### Via Git +### Build from Source + +To build from source, there are three ways to install dependencies: + +- Via vcpkg + + You can manage the dependencies with vcpkg, and use Visual Studio 2015 update 3 or Visual Studio 2017 as the development environment. 
Simply installing Casablanca via vcpkg will set up everything needed. + ```BatchFile + C:\src\vcpkg> .\vcpkg install cpprestsdk + ``` + + If you want to build and run test code, you can install UnitTest++ via vcpkg: + ```BatchFile + C:\src\vcpkg> .\vcpkg install unittest-cpp + ``` + +- Via NuGet + + Because Casablanca no longer releases NuGet packages, starting from 5.1.0 this repository cannot be built with pre-built Casablanca NuGet packages. However, you can export your own Casablanca NuGet package to install this project's dependencies: + ```BatchFile + C:\src\vcpkg> .\vcpkg install cpprestsdk + C:\src\vcpkg> .\vcpkg export --nuget cpprestsdk --nuget-id=Casablanca --nuget-version=2.10.18 + ``` + +- Manage dependencies by yourself + + Managing dependencies yourself is not recommended. However, you can still build Casablanca yourself by following [these instructions](https://github.com/microsoft/cpprestsdk/wiki) and then specify the include directories and link the binaries manually. -To create a local clone of the source for the Azure Storage Client Library for C++ via `git`, type: +To create a local clone of the source for the Azure Storage Client Library for C++ via git, type: ```bash git clone https://github.com/Azure/azure-storage-cpp.git cd azure-storage-cpp ``` -### Via NuGet +Follow [Getting Started on Linux](#getting-started-on-linux) or [Getting Started on macOS](#getting-started-on-macos) to build on these two platforms. -To install the binaries for the Azure Storage Client Library for C++, type the following into the [NuGet Package Manager console](http://docs.nuget.org/docs/start-here/using-the-package-manager-console): -`Install-Package wastorage` +To build on Windows, open the solution file in the project root directory directly with Visual Studio. -#### Visual Studio Version -Starting from version 2.1.0, Azure Storage Client Library for C++ supports Visual Studio 2013 and Visual Studio 2015. In case you have the need to use Visual Studio 2012, please get [version 2.0.0](http://www.nuget.org/packages/wastorage/2.0.0). +#### Visual Studio Version +Starting from version 6.1.0, Azure Storage Client Library for C++ supports Visual Studio 2015 and Visual Studio 2017. 
If you need to use Visual Studio 2013, please get [version 6.0.0](https://github.com/Azure/azure-storage-cpp/releases/tag/v6.0.0); to use Visual Studio 2012, please get [version 2.0.0](http://www.nuget.org/packages/wastorage/2.0.0). -### Via Vcpkg +### Via NuGet -To install the Azure Storage Client Library for C++ through Vcpkg, you need Vcpkg installed first. Please follow the instructions(https://github.com/Microsoft/vcpkg#quick-start) to install Vcpkg. +To install the binaries for the Azure Storage Client Library for C++, you can export a NuGet package with vcpkg and put it into your local NuGet feed. For more information about how to export a NuGet package, please see [Binary Export](https://github.com/Microsoft/vcpkg/blob/master/docs/specifications/export-command.md). -install package with: -``` -C:\src\vcpkg> .\vcpkg install azure-storage-cpp +Normally, exporting a NuGet package is done with the following command: +```BatchFile +C:\src\vcpkg> .\vcpkg export --nuget azure-storage-cpp --nuget-id=Microsoft.Azure.Storage.CPP --nuget-version=7.1.0 ``` -#### Visual Studio Version -Starting from version 2.1.0, Azure Storage Client Library for C++ supports Visual Studio 2013 and Visual Studio 2015. In case you have the need to use Visual Studio 2012, please get [version 2.0.0](http://www.nuget.org/packages/wastorage/2.0.0). +### Via vcpkg + +To install the Azure Storage Client Library for C++ through vcpkg, you need vcpkg installed first. Please follow [the instructions](https://github.com/Microsoft/vcpkg#quick-start) to install vcpkg. +Install the package with: +```BatchFile +C:\src\vcpkg> .\vcpkg install azure-storage-cpp +``` ## Dependencies ### C++ REST SDK -The Azure Storage Client Library for C++ depends on the C++ REST SDK (codename "Casablanca") 2.9.1. It can be installed through [NuGet](https://www.nuget.org/packages/cpprestsdk/2.9.1) or downloaded directly from [GitHub](https://github.com/Microsoft/cpprestsdk/releases/tag/v2.9.1). 
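The compatibility chart below records which Casablanca version each azure-storage-cpp release was validated against. When checking an installed dependency against such a chart, dotted version strings must be compared numerically rather than lexicographically (as a plain string, "2.9.1" sorts after "2.10.6"). A small illustrative helper, not part of either library:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a dotted version string like "2.10.18" into numeric components.
std::vector<int> parse_version(const std::string& version)
{
    std::vector<int> parts;
    std::istringstream stream(version);
    std::string part;
    while (std::getline(stream, part, '.'))
    {
        parts.push_back(std::stoi(part));
    }
    return parts;
}

// Returns true if `actual` >= `required`. std::vector's operator>= compares
// element by element, which matches dotted-version semantics.
bool version_at_least(const std::string& actual, const std::string& required)
{
    return parse_version(actual) >= parse_version(required);
}
```

For example, `version_at_least("2.10.18", "2.9.1")` holds even though the string comparison would say otherwise.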
+The Azure Storage Client Library for C++ depends on the C++ REST SDK (codename "Casablanca"). It can be installed through vcpkg (`vcpkg install cpprestsdk`) or downloaded directly from [GitHub](https://github.com/Microsoft/cpprestsdk/releases/). + +The validated Casablanca version for each major or recent release on different platforms can be found in the following chart: + + +| azure-storage-cpp's version | Casablanca version for Windows | Casablanca version for Linux | |-----------------------------|--------------------------------|------------------------------| | 1.0.0 | 2.4.0 | 2.4.0 | | 2.0.0 | 2.4.0 | 2.4.0 | | 3.0.0 | 2.9.1 | 2.9.1 | | 4.0.0 | 2.9.1 | 2.9.1 | | 5.0.0 | 2.9.1 | 2.9.1 | | 5.0.1 | 2.9.1 | 2.9.1 | | 5.1.0 | 2.10.6 | 2.10.3 | | 5.1.1 | 2.10.6 | 2.10.3 | | 5.2.0 | 2.10.6 | 2.10.3 | | 6.0.0 | 2.10.10 | 2.10.10 | | 6.1.0 | 2.10.13 | 2.10.13 | | 7.0.0 | 2.10.14 | 2.10.14 | | 7.1.0 | 2.10.14 | 2.10.14 | | 7.2.0 | 2.10.14 | 2.10.14 | | 7.3.0 | 2.10.15 | 2.10.15 | | 7.3.1 | 2.10.15 | 2.10.15 | | 7.4.0 | 2.10.16 | 2.10.16 | | 7.5.0 | 2.10.18 | 2.10.18 | ## Code Samples @@ -88,7 +143,9 @@ To get started with the coding, please visit the following articles: To accomplish specific tasks, please find the code samples at [samples folder](Microsoft.WindowsAzure.Storage/samples). ## Getting Started on Linux -As mentioned above, the Azure Storage Client Library for C++ depends on Casablanca. Follow [these instructions](https://github.com/Microsoft/cpprestsdk/wiki/How-to-build-for-Linux) to compile it. Current version of the library depends on Casablanca version 2.9.1. + +### Getting Started on Ubuntu +As mentioned above, the Azure Storage Client Library for C++ depends on Casablanca. Follow [these instructions](https://github.com/Microsoft/cpprestsdk/wiki/How-to-build-for-Linux) to compile it. 
Once this is complete, then:
@@ -99,19 +156,19 @@ git clone https://github.com/Azure/azure-storage-cpp.git
The project is cloned to a folder called `azure-storage-cpp`. Always use the master branch, which contains the latest release.
- Install additional dependencies:
```bash
-sudo apt-get install libxml++2.6-dev libxml++2.6-doc uuid-dev
+sudo apt-get install libxml2-dev uuid-dev
```
- Build the SDK for Release:
```bash
cd azure-storage-cpp/Microsoft.WindowsAzure.Storage
mkdir build.release
cd build.release
-CASABLANCA_DIR= CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release
+CASABLANCA_DIR= CXX=g++-5.1 cmake .. -DCMAKE_BUILD_TYPE=Release
make
```
In the above command, replace `` to point to your local installation of Casablanca. For example, if the file `libcpprest.so` exists at location `~/Github/Casablanca/cpprestsdk/Release/build.release/Binaries/libcpprest.so`, then your `cmake` command should be:
```bash
-CASABLANCA_DIR=~/Github/Casablanca/cpprestsdk CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release
+CASABLANCA_DIR=~/Github/Casablanca/cpprestsdk CXX=g++-5.1 cmake .. -DCMAKE_BUILD_TYPE=Release
```
The library is generated under `azure-storage-cpp/Microsoft.WindowsAzure.Storage/build.release/Binaries/`.
@@ -120,9 +177,106 @@ To build and run unit tests:
```bash
sudo apt-get install libunittest++-dev
```
+
+- Build the test code:
+```bash
+CASABLANCA_DIR= CXX=g++-5.1 cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTS=ON
+make
+```
+- Run unit tests:
+```bash
+cd Binaries
+vi test_configurations.json # modify test config file to include your storage account credentials
+./azurestoragetest
+```
+
+To build sample code:
+```bash
+vi ../samples/SamplesCommon/samples_common.h # modify connection string to include your storage account credentials
+CASABLANCA_DIR= CXX=g++-5.1 cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SAMPLES=ON
+make
+```
+To run the samples:
+```bash
+cd Binaries
+./azurestoragesample
+```
+
+Please note the current build script is only tested on Ubuntu 16.04; update it accordingly for other distributions.
+
+Please note that starting from 2.10.0, Casablanca requires a minimum of CMake v3.1, so the default CMake on Ubuntu 14.04 cannot build Casablanca. You can upgrade CMake yourself to build Casablanca. If the default CMake (2.8) on Ubuntu 14.04 must be used, azure-storage-cpp 5.0.1 with Casablanca v2.9.1 is recommended.
+
+### Getting Started on SLES12
+
+*Please note the following build script is only tested on SLES12 SP3. The script may need to be updated accordingly for other distributions.*
+
+Before building the Azure Storage Client Library for C++, some prerequisites need to be installed first:
+
+- Install prerequisites:
+```bash
+sudo zypper install git gcc-c++ boost-devel cmake libopenssl-devel libxml2-devel libuuid-devel
+```
+
+The Azure Storage Client Library for C++ depends on Casablanca. The following are instructions to build and install Casablanca:
+
+- Clone the project using git:
+```bash
+git clone https://github.com/Microsoft/cpprestsdk.git
+```
+
+- Check out the version on which the Azure Storage Client Library for C++ depends:
+```bash
+cd cpprestsdk
+git checkout tags/v2.10.18 -b v2.10.18
+```
+
+- Build the project in Release mode:
+```bash
+cd Release
+git submodule update --init
+mkdir build.release
+cd build.release
+CXX=g++-5.1 cmake .. -DCMAKE_BUILD_TYPE=Release -DWERROR=OFF -DBUILD_SAMPLES=OFF -DBUILD_TESTS=OFF
+sudo make install
+```
+
+To build the Azure Storage Client Library for C++ project:
+
+- Clone the project using git:
+```bash
+git clone https://github.com/Azure/azure-storage-cpp.git
+```
+The project is cloned to a folder called `azure-storage-cpp`. Always use the master branch, which contains the latest release.
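Since v7.0 the library requires GCC 5.1 or newer, so it can save a failed build to verify the compiler before running `cmake`. The following is a hedged sketch, not part of the repo: the `version_ge` helper is illustrative, and `sort -V` is a GNU coreutils extension.

```shell
# Illustrative pre-flight check: fail fast when $CXX is older than the
# GCC 5.1 minimum required since azure-storage-cpp v7.0.
version_ge() {
    # true when version $1 >= version $2, compared with GNU sort -V
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

cxx="${CXX:-g++}"
ver="$("$cxx" -dumpversion 2>/dev/null || echo 0)"
if version_ge "$ver" "5.1"; then
    echo "OK: $cxx $ver satisfies the GCC 5.1 minimum"
else
    echo "ERROR: $cxx $ver is older than GCC 5.1" >&2
fi
```

Run it in the same shell that will invoke `cmake`, so the same `CXX` value is checked.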
+
+- Build the SDK in Release mode:
+```bash
+cd azure-storage-cpp/Microsoft.WindowsAzure.Storage
+mkdir build.release
+cd build.release
+CXX=g++-5.1 cmake .. -DCMAKE_BUILD_TYPE=Release
+make
+```
+
+The library is generated under `azure-storage-cpp/Microsoft.WindowsAzure.Storage/build.release/Binaries/`.
+
+The Azure Storage Client Library for C++ project depends on UnitTest++ for unit tests.
+
+To build and install UnitTest++:
+- Clone the project using git:
+```bash
+git clone https://github.com/unittest-cpp/unittest-cpp.git
+```
+
+- Build and install the project:
+```bash
+cd unittest-cpp/builds/
+CXX=g++-5.1 cmake ..
+sudo make install
+```
+
+Build and run unit tests against the Azure Storage Client Library for C++:
- Build the test code:
```bash
-CASABLANCA_DIR= CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTS=ON
+CXX=g++-5.1 cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTS=ON
make
```
- Run unit tests
@@ -134,29 +288,132 @@ vi test_configurations.json # modify test config file to include your storage ac
To build sample code:
```bash
-CASABLANCA_DIR= CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SAMPLES=ON
+vi ../samples/SamplesCommon/samples_common.h # modify connection string to include your storage account credentials
+CXX=g++-5.1 cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SAMPLES=ON
make
```
To run the samples:
```bash
cd Binaries
+./azurestoragesample
+```
+
+### Getting Started on CentOS 6/7
+
+*Please note the following build script is only tested on CentOS 6.10 and 7.6. The script may need to be updated accordingly for other distributions.*
+
+Before building the Azure Storage Client Library for C++, some prerequisites need to be installed first:
+- Install prerequisites:
+```bash
+sudo yum install epel-release centos-release-scl
+sudo yum install git cmake3 make openssl-devel libxml2-devel libuuid-devel
+```
+
+- Install and enable a newer gcc-c++ via Software Collections.
Note that `devtoolset-4` may not be available on some platforms; you can install any newer one instead, such as `devtoolset-8`.
+```bash
+sudo yum install devtoolset-4-gcc-c++
+scl enable devtoolset-4 bash
+```
+
+- Download and install Boost:
+```bash
+wget https://dl.bintray.com/boostorg/release/1.68.0/source/boost_1_68_0.tar.gz
+tar xvf boost_1_68_0.tar.gz
+cd boost_1_68_0
+./bootstrap.sh
+sudo ./b2 install
+```
+
+The Azure Storage Client Library for C++ depends on Casablanca. The following are instructions to build and install Casablanca:
+
+- Clone the project using git:
+```bash
+git clone https://github.com/Microsoft/cpprestsdk.git
+```
+
+- Check out the version on which the Azure Storage Client Library for C++ depends:
+```bash
+cd cpprestsdk
+git checkout tags/v2.10.18 -b v2.10.18
+```
+
+- Build the project in Release mode:
+```bash
+git submodule update --init
+mkdir Release/build.release
+cd Release/build.release
+cmake3 .. -DCMAKE_BUILD_TYPE=Release -DWERROR=OFF -DBUILD_SAMPLES=OFF -DBUILD_TESTS=OFF
+sudo make install
+```
+
+To build the Azure Storage Client Library for C++ project:
+
+- Clone the project using git:
+```bash
+git clone https://github.com/Azure/azure-storage-cpp.git
+```
+The project is cloned to a folder called `azure-storage-cpp`. Always use the master branch, which contains the latest release.
+
+- Build the SDK in Release mode:
+```bash
+cd azure-storage-cpp/Microsoft.WindowsAzure.Storage
+mkdir build.release
+cd build.release
+cmake3 .. -DCMAKE_BUILD_TYPE=Release
+make
+```
+
+The library is generated under `azure-storage-cpp/Microsoft.WindowsAzure.Storage/build.release/Binaries/`.
+
+The Azure Storage Client Library for C++ project depends on UnitTest++ for unit tests.
+
+To build and install UnitTest++:
+- Clone the project using git:
+```bash
+git clone https://github.com/unittest-cpp/unittest-cpp.git
+```
+
+- Build and install the project:
+```bash
+cd unittest-cpp/builds/
+cmake3 ..
+sudo make install
+```
+
+Build and run unit tests against the Azure Storage Client Library for C++:
+- Build the test code:
+```bash
+cmake3 .. -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTS=ON
+make
+```
+- Run unit tests:
+```bash
+cd Binaries
+vi test_configurations.json # modify test config file to include your storage account credentials
+./azurestoragetest
+```
+
+- To build sample code:
+```bash
vi ../samples/SamplesCommon/samples_common.h # modify connection string to include your storage account credentials
-./samplesblobs # run the blobs sample
-./samplesjson # run the tables sample with JSON payload
-./samplestables # run the tables sample
-./samplesqueues # run the queues sample
+cmake3 .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SAMPLES=ON
+make
```
-Please note the current build script is only tested on Ubuntu 14.04. Please update the script accordingly for other distributions.
+- To run the samples:
+```bash
+cd Binaries
+./azurestoragesample
+```
-## Getting Started on OSX
+## Getting Started on macOS
-*Note that OSX is not officially supported yet, but it has been seen to work, YMMV. This build has been tested to work when the dependencies are installed via homebrew, YMMV if using FINK or MacPorts*
+*Note that macOS is not officially supported yet, but it has been seen to work, YMMV. This build has been tested to work when the dependencies are installed via homebrew; YMMV if using Fink or MacPorts.*
Install dependencies with homebrew:
```
-brew install libxml++ ossp-uuid openssl
+brew install libxml2 ossp-uuid openssl
```
As mentioned above, the Azure Storage Client Library for C++ depends on Casablanca.
@@ -165,7 +422,7 @@ If you are using homebrew you can install it from there:
brew install cpprestsdk
```
-Otherwise, you may need to build it. Follow [these instructions](https://github.com/Microsoft/cpprestsdk/wiki/How-to-build-for-Mac-OS-X) to compile it. Current version of the library depends on Casablanca version 2.9.1.
+Otherwise, you may need to build it.
Follow [these instructions](https://github.com/Microsoft/cpprestsdk/wiki/How-to-build-for-Mac-OS-X) to compile it.
Once this is complete, then:
@@ -177,7 +434,7 @@ The project is cloned to a folder called `azure-storage-cpp`. Always use the mas
**Some notes about building**:
- If you're using homebrew, there seems to be an issue with the pkg-config files, which means that, by default, a -L flag to tell the linker where libintl lives is left out. We've accounted for this in our CMAKE file, by looking in the usual directory that homebrew puts those libs. If you are not using homebrew, you will get an error stating that you need to tell us where those libs live.
-- Similarly, for openssl, you don't want to use the version that comes with OSX, it is old. We've accounted for this in the CMAKE script by setting the search paths to where homebrew puts openssl, so if you're not using homebrew you'll need to tell us where a more recent version of openssl lives.
+- Similarly, for openssl, you don't want to use the version that comes with macOS; it is old. We've accounted for this in the CMAKE script by setting the search paths to where homebrew puts openssl, so if you're not using homebrew you'll need to tell us where a more recent version of openssl lives.
- Build the SDK for Release if you are using homebrew:
```bash
@@ -200,7 +457,7 @@ make
In the above command, replace:
- `` to point to your local installation of Casablanca. For example, if the file `libcpprest.so` exists at location `~/Github/Casablanca/cpprestsdk/Release/build.release/Binaries/libcpprest.dylib`, then should be `~/Github/Casablanca/cpprestsdk`
-`` to your local openssl, it is recommended not to use the version that comes with OSX, rather use one from Homebrew or the like
+`` to your local openssl; it is recommended not to use the version that comes with macOS, but rather one from Homebrew or the like.
This should be the path that contains the `lib` and `include` directories - `` is the directory which contains `libintl.dylib` For example you might use: diff --git a/SECURITY.md b/SECURITY.md new file mode 100644 index 00000000..e138ec5d --- /dev/null +++ b/SECURITY.md @@ -0,0 +1,41 @@ + + +## Security + +Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). + +If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below. + +## Reporting Security Issues + +**Please do not report security vulnerabilities through public GitHub issues.** + +Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). + +If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). + +You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). 
+ +Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: + + * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) + * Full paths of source file(s) related to the manifestation of the issue + * The location of the affected source code (tag/branch/commit or direct URL) + * Any special configuration required to reproduce the issue + * Step-by-step instructions to reproduce the issue + * Proof-of-concept or exploit code (if possible) + * Impact of the issue, including how an attacker might exploit the issue + +This information will help us triage your report more quickly. + +If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs. + +## Preferred Languages + +We prefer all communications to be in English. + +## Policy + +Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). 
+ + diff --git a/azure-pipelines.yml b/azure-pipelines.yml new file mode 100644 index 00000000..6e7eac60 --- /dev/null +++ b/azure-pipelines.yml @@ -0,0 +1,125 @@ +trigger: +- master +- dev + +pr: +- dev + +variables: + cpp_rest_sdk_version: 2.10.16 + +jobs: +- job: build_test_linux + displayName: Build and Test on Linux + timeoutInMinutes: 240 + + strategy: + maxParallel: 16 + matrix: + UBUNTU1404: + container_image: ubuntu14.04:cpprestsdk_$(cpp_rest_sdk_version) + build_type: Release + UBUNTU1604: + container_image: ubuntu16.04:cpprestsdk_$(cpp_rest_sdk_version) + build_type: Release + UBUNTU1804: + container_image: ubuntu18.04:cpprestsdk_$(cpp_rest_sdk_version) + build_type: Release + UBUNTU1804_DEBUG: + container_image: ubuntu18.04:cpprestsdk_$(cpp_rest_sdk_version) + build_type: Debug + UBUNTU1804_I686: + container_image: ubuntu18.04.i686:cpprestsdk_$(cpp_rest_sdk_version) + build_type: Release + build_env_init: export CXXFLAGS=-m32 + CENTOS7_DEBUG: + container_image: centos7:cpprestsdk_$(cpp_rest_sdk_version) + build_type: Debug + build_env_init: source scl_source enable devtoolset-4 + + pool: + vmImage: 'ubuntu-16.04' + + container: + image: azurecppsdkpipeline.azurecr.io/$(container_image) + endpoint: azure_docker_registry_connection + + steps: + - script: | + $(build_env_init) + cmake Microsoft.WindowsAzure.Storage/CMakeLists.txt -B$(Build.BinariesDirectory) -DCMAKE_BUILD_TYPE=$(build_type) -DBUILD_SAMPLES=ON -DBUILD_TESTS=ON + cmake --build $(Build.BinariesDirectory) -- -j$(nproc) + displayName: Build + + - script: echo ${MAPPED_TEST_CONFIGURATION} > $(Build.BinariesDirectory)/Binaries/test_configurations.json + displayName: Copy Test Configuration + env: + MAPPED_TEST_CONFIGURATION: $(test_configuration) + + - script: ./azurestoragetest $(excluded_testcases) $(retry_policy) --warning-message='##vso[task.logissue type=warning]' + workingDirectory: $(Build.BinariesDirectory)/Binaries + displayName: Run Tests + + +- job: build_test_windows + displayName: 
Build and Test on Windows + timeoutInMinutes: 300 + + variables: + - name: project_file + value: Microsoft.WindowsAzure.Storage\tests\Microsoft.WindowsAzure.Storage.UnitTests.v141.vcxproj + + strategy: + maxParallel: 16 + matrix: + VS2017: + vm_image: vs2017-win2016 + platform: x64 + configuration: Release + toolset: v141 + VS2017_DEBUG: + vm_image: vs2017-win2016 + platform: x64 + configuration: Debug + toolset: v141 + VS2017_WIN32: + vm_image: vs2017-win2016 + platform: Win32 + configuration: Release + toolset: v141 + VS2019: + vm_image: windows-2019 + platform: x64 + configuration: Release + toolset: v142 + + pool: + vmImage: $(vm_image) + + steps: + - powershell: | + Invoke-WebRequest -Uri https://codeload.github.com/microsoft/vcpkg/zip/master -OutFile vcpkg-master.zip + Add-Type -AssemblyName System.IO.Compression.FileSystem + [System.IO.Compression.ZipFile]::ExtractToDirectory("vcpkg-master.zip", "C:\") + cd C:\vcpkg-master + .\bootstrap-vcpkg.bat + .\vcpkg install cpprestsdk[core]:x86-windows cpprestsdk[core]:x64-windows unittest-cpp:x86-windows unittest-cpp:x64-windows + .\vcpkg integrate install + displayName: Install Dependencies + + - task: MSBUILD@1 + displayName: Build + inputs: + solution: $(project_file) + msbuildArguments: /p:OutDir=$(Build.BinariesDirectory)\ /p:PlatformToolset=$(toolset) + platform: $(platform) + configuration: $(configuration) + + - script: echo %MAPPED_TEST_CONFIGURATION% > $(Build.BinariesDirectory)\test_configurations.json + displayName: Copy Test Configuration + env: + MAPPED_TEST_CONFIGURATION: $(test_configuration) + + - script: wastoretest.exe $(excluded_testcases) $(retry_policy) --warning-message="##vso[task.logissue type=warning]" + workingDirectory: $(Build.BinariesDirectory) + displayName: Run Tests diff --git a/build.proj b/build.proj index 2c49f3be..580c6ea9 100644 --- a/build.proj +++ b/build.proj @@ -17,10 +17,6 @@ Configuration=Release;Platform=x64 - - - - @@ -31,7 +27,7 @@ BuildInParallel="true" /> - + - - - - - 
- - - - - \ No newline at end of file diff --git a/tools/NuGet.exe b/tools/NuGet.exe deleted file mode 100644 index 324daa84..00000000 Binary files a/tools/NuGet.exe and /dev/null differ diff --git a/tools/copy_Signed.bat b/tools/copy_Signed.bat deleted file mode 100644 index cab89eb8..00000000 --- a/tools/copy_Signed.bat +++ /dev/null @@ -1,27 +0,0 @@ -@echo off - -pushd %~dp0 - -if NOT EXIST %1\Signed echo %1\Signed not exists && goto copyfailed - -echo Copying signed DLLs and the pdbs to the final drop location... -copy /y %1\Signed\v140_Win32_Debug_wastorage.dll ..\Microsoft.WindowsAzure.Storage\v140\Win32\Debug\wastorage.dll -copy /y %1\Signed\v140_x64_Debug_wastorage.dll ..\Microsoft.WindowsAzure.Storage\v140\x64\Debug\wastorage.dll -copy /y %1\Signed\v140_Win32_Release_wastorage.dll ..\Microsoft.WindowsAzure.Storage\v140\Win32\Release\wastorage.dll -copy /y %1\Signed\v140_x64_Release_wastorage.dll ..\Microsoft.WindowsAzure.Storage\v140\x64\Release\wastorage.dll -copy /y %1\Signed\v120_Win32_Debug_wastorage.dll ..\Microsoft.WindowsAzure.Storage\v120\Win32\Debug\wastorage.dll -copy /y %1\Signed\v120_x64_Debug_wastorage.dll ..\Microsoft.WindowsAzure.Storage\v120\x64\Debug\wastorage.dll -copy /y %1\Signed\v120_Win32_Release_wastorage.dll ..\Microsoft.WindowsAzure.Storage\v120\Win32\Release\wastorage.dll -copy /y %1\Signed\v120_x64_Release_wastorage.dll ..\Microsoft.WindowsAzure.Storage\v120\x64\Release\wastorage.dll -if %ERRORLEVEL% neq 0 goto copyfailed -echo OK - -popd -exit /b 0 - -:copyfailed - -echo FAILED. 
Unable to copy DLLs - -popd -exit /b -1 \ No newline at end of file diff --git a/tools/copy_ToSign.bat b/tools/copy_ToSign.bat deleted file mode 100644 index e3484888..00000000 --- a/tools/copy_ToSign.bat +++ /dev/null @@ -1,30 +0,0 @@ -@echo off - -pushd %~dp0 - -echo Clean up the existing ToSign folder -if EXIST %1\ToSign rmdir /s /q %1\ToSign -mkdir %1\ToSign -if %ERRORLEVEL% neq 0 goto copyfailed -echo OK - -echo Copying DLLs to signing source directory -copy /y ..\Microsoft.WindowsAzure.Storage\v140\Win32\Debug\wastorage.dll %1\ToSign\v140_Win32_Debug_wastorage.dll -copy /y ..\Microsoft.WindowsAzure.Storage\v140\x64\Debug\wastorage.dll %1\ToSign\v140_x64_Debug_wastorage.dll -copy /y ..\Microsoft.WindowsAzure.Storage\v140\Win32\Release\wastorage.dll %1\ToSign\v140_Win32_Release_wastorage.dll -copy /y ..\Microsoft.WindowsAzure.Storage\v140\x64\Release\wastorage.dll %1\ToSign\v140_x64_Release_wastorage.dll -copy /y ..\Microsoft.WindowsAzure.Storage\v120\Win32\Debug\wastorage.dll %1\ToSign\v120_Win32_Debug_wastorage.dll -copy /y ..\Microsoft.WindowsAzure.Storage\v120\x64\Debug\wastorage.dll %1\ToSign\v120_x64_Debug_wastorage.dll -copy /y ..\Microsoft.WindowsAzure.Storage\v120\Win32\Release\wastorage.dll %1\ToSign\v120_Win32_Release_wastorage.dll -copy /y ..\Microsoft.WindowsAzure.Storage\v120\x64\Release\wastorage.dll %1\ToSign\v120_x64_Release_wastorage.dll -if %ERRORLEVEL% neq 0 goto copyfailed -echo OK - -popd -exit /b 0 - -:copyfailed - -echo FAILED. 
Unable to copy DLLs -popd -exit /b -1 \ No newline at end of file diff --git a/wastorage.nuspec b/wastorage.nuspec deleted file mode 100644 index a0afd94d..00000000 --- a/wastorage.nuspec +++ /dev/null @@ -1,23 +0,0 @@ - - - - wastorage - 2.6.0 - Microsoft Azure Storage Client Library for C++ - Microsoft Corporation - Microsoft Corporation - http://go.microsoft.com/fwlink/?LinkId=235170 - http://go.microsoft.com/fwlink/?LinkId=235168 - http://go.microsoft.com/fwlink/?LinkID=288890 - A client library for working with Microsoft Azure storage services including blobs, files, tables, and queues. - This client library enables working with the Microsoft Azure storage services which include the blob service for storing binary and text data, the file service for storing binary and text data, the table service for storing structured non-relational data, and the queue service for storing messages that may be accessed by a client. Microsoft Azure Storage team's blog - http://blogs.msdn.com/b/windowsazurestorage/ - Public release - Microsoft Azure Storage Table Blob Queue File Scalable windowsazureofficial - - - - - - - - diff --git a/wastorage.v120.nuspec b/wastorage.v120.nuspec deleted file mode 100644 index 3f13a176..00000000 --- a/wastorage.v120.nuspec +++ /dev/null @@ -1,39 +0,0 @@ - - - - wastorage.v120 - 2.6.0 - Microsoft Azure Storage Client Library for C++ - Microsoft Corporation - Microsoft Corporation - http://go.microsoft.com/fwlink/?LinkId=235170 - http://go.microsoft.com/fwlink/?LinkId=235168 - http://go.microsoft.com/fwlink/?LinkID=288890 - A client library for working with Microsoft Azure storage services including blobs, files, tables, and queues. 
- This client library enables working with the Microsoft Azure storage services which include the blob service for storing binary and text data, the file service for storing binary and text data, the table service for storing structured non-relational data, and the queue service for storing messages that may be accessed by a client. Microsoft Azure Storage team's blog - http://blogs.msdn.com/b/windowsazurestorage/ - Public release - Microsoft Azure Storage Table Blob Queue File Scalable windowsazureofficial - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/wastorage.v120.targets b/wastorage.v120.targets deleted file mode 100644 index dbd4a8c9..00000000 --- a/wastorage.v120.targets +++ /dev/null @@ -1,35 +0,0 @@ - - - true - - - true - - - - $(MSBuildThisFileDirectory)..\..\lib\native\v120\x64\Debug\wastorage.lib;%(AdditionalDependencies) - $(MSBuildThisFileDirectory)..\..\lib\native\v120\x64\Release\wastorage.lib;%(AdditionalDependencies) - $(MSBuildThisFileDirectory)..\..\lib\native\v120\Win32\Debug\wastorage.lib;%(AdditionalDependencies) - $(MSBuildThisFileDirectory)..\..\lib\native\v120\Win32\Release\wastorage.lib;%(AdditionalDependencies) - - - $(MSBuildThisFileDirectory)include;%(AdditionalIncludeDirectories) - - - - - - - - - - - - - - - - - - - diff --git a/wastorage.v140.nuspec b/wastorage.v140.nuspec deleted file mode 100644 index 06dc44cb..00000000 --- a/wastorage.v140.nuspec +++ /dev/null @@ -1,39 +0,0 @@ - - - - wastorage.v140 - 2.6.0 - Microsoft Azure Storage Client Library for C++ - Microsoft Corporation - Microsoft Corporation - http://go.microsoft.com/fwlink/?LinkId=235170 - http://go.microsoft.com/fwlink/?LinkId=235168 - http://go.microsoft.com/fwlink/?LinkID=288890 - A client library for working with Microsoft Azure storage services including blobs, files, tables, and queues. 
- This client library enables working with the Microsoft Azure storage services which include the blob service for storing binary and text data, the file service for storing binary and text data, the table service for storing structured non-relational data, and the queue service for storing messages that may be accessed by a client. Microsoft Azure Storage team's blog - http://blogs.msdn.com/b/windowsazurestorage/ - Public release - Microsoft Azure Storage Table Blob Queue File Scalable windowsazureofficial - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/wastorage.v140.targets b/wastorage.v140.targets deleted file mode 100644 index cf1b5936..00000000 --- a/wastorage.v140.targets +++ /dev/null @@ -1,35 +0,0 @@ - - - true - - - true - - - - $(MSBuildThisFileDirectory)..\..\lib\native\v140\x64\Debug\wastorage.lib;%(AdditionalDependencies) - $(MSBuildThisFileDirectory)..\..\lib\native\v140\x64\Release\wastorage.lib;%(AdditionalDependencies) - $(MSBuildThisFileDirectory)..\..\lib\native\v140\Win32\Debug\wastorage.lib;%(AdditionalDependencies) - $(MSBuildThisFileDirectory)..\..\lib\native\v140\Win32\Release\wastorage.lib;%(AdditionalDependencies) - - - $(MSBuildThisFileDirectory)include;%(AdditionalIncludeDirectories) - - - - - - - - - - - - - - - - - - -