AWS Tools for Windows PowerShell v1.1.1

This release of the AWS Tools for Windows PowerShell contains enhancements to the pipelining capability of the tools, as well as other changes making them easier to use. Please note that some of these changes are of a breaking nature; this document details the changes that have been made.

Change Summary

Detailed Change Notes

1. Collection output from cmdlets is now always enumerated to the PowerShell pipeline

For service calls that return collections, the objects within the collection are now always enumerated to the pipeline. In the previous version, the collection object itself was emitted, which required the use of foreach {$_.getenumerator()} to continue pipelining.

Result objects that contain additional, non-paging-control fields beyond the collection have those fields recorded as Note properties on the response entries in the new $AWSHistory session variable, should you need to access this data. The $AWSHistory variable is described in note (3) below.

See also note (2) below on how to control the amount of data returned to the pipe if you need that level of control for your usage scenario.

Because collections now enumerate to the pipeline, cmdlet output that is stored into a variable may be $null, a single object, or a collection. A collection exposes a .Count property indicating its size, but that property is not present when only a single object is emitted. If your script needs a consistent way to determine how many objects were emitted, use the new EmittedObjectsCount property of the last command entry in $AWSHistory (see note (3)). For example:

    $nullOneOrMoreObjects = Get-S3Object ... 
    if ($AWSHistory.LastCommand.EmittedObjectsCount -eq 0) 
    { 
        'No objects were returned' 
    } 
    elseif ($AWSHistory.LastCommand.EmittedObjectsCount -eq 1) 
    { 
        'One object was returned' 
    } 
    else 
    { 
        'Multiple objects returned; $nullOneOrMoreObjects.Count or ' + 
        '$AWSHistory.LastCommand.EmittedObjectsCount yields the size' 
    }

2. Automatic Page-to-Completion for pageable service calls

For service APIs that impose a default maximum object return count for a given call or that support pageable result sets, the default behavior for all cmdlets is now to page-to-completion, making as many calls as necessary on your behalf to return the complete data set to the pipeline.

If you want to retain control over the amount of data returned, you can continue to use parameters on the individual cmdlets (e.g. MaxKeys on Get-S3Object), and/or you can handle paging explicitly yourself by combining the paging parameters on the cmdlets with the next-token data recorded in the $AWSHistory variable. See note (3) for information about the $AWSHistory variable.

Using Get-S3Object as an example:

# on completion, $c contains S3Object instances for *every* key in the bucket test 
# (potentially a huge data set)
$c = Get-S3Object -BucketName test

# on completion, $c contains up to a maximum of 500 S3Object instances for the first 500 objects 
# found in the bucket. 
$c = Get-S3Object -BucketName test -MaxKeys 500

To know if more data was available but not returned, use the $AWSHistory session variable entry that recorded the calls the cmdlet made:

($AWSHistory.LastServiceResponse -ne $null) -and $AWSHistory.LastServiceResponse.IsTruncated

If this evaluates to $true, you can find the next marker for the next set of results using $AWSHistory.LastServiceResponse.NextMarker:

# on completion, $c contains up to a maximum of 500 S3Object instances for the next 500 objects 
# found in the bucket that start after the specified key prefix marker
$c = Get-S3Object -BucketName test -MaxKeys 500 -Marker $AWSHistory.LastServiceResponse.NextMarker

To manually control paging with Get-S3Object, use a combination of the MaxKeys and Marker parameters on the cmdlet and the IsTruncated/NextMarker Note properties on the last recorded response. This matches how paging was handled in the 1.0.x releases.
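Putting these pieces together, a manual paging loop might look like the following sketch. The bucket name 'test' and the page size of 100 are placeholders; substitute your own values:

# Manual paging sketch: list the bucket 100 keys at a time, using the
# IsTruncated/NextMarker data recorded in $AWSHistory to drive the loop
$page = Get-S3Object -BucketName test -MaxKeys 100
while (($AWSHistory.LastServiceResponse -ne $null) -and $AWSHistory.LastServiceResponse.IsTruncated)
{
    # ... process $page here ...
    $page = Get-S3Object -BucketName test -MaxKeys 100 -Marker $AWSHistory.LastServiceResponse.NextMarker
}
# ... process the final (or only) $page here ...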

3. New $AWSHistory shell variable

To better support pipelining, output from AWS cmdlets is no longer reshaped to include the service (SDK) response and result instances as Note properties on the emitted collection object. Instead, for those calls that emit a single collection as output, the collection is now enumerated to the PowerShell pipeline as described in (1) above. This means that the SDK response/result data cannot exist in the pipe as there is no containing collection object to which it can be attached.

Although most users probably won't need this data, it can be useful for diagnostic purposes as you can see exactly what was sent to and received from the underlying AWS service call(s) made by the cmdlet.

Starting with version 1.1 this data and more is now available in a new shell variable named $AWSHistory. This variable maintains a record of AWS cmdlet invocations and for each, the service responses that were received. Optionally, this history can be configured to also record the service requests that each cmdlet made. Additional useful data such as the overall execution time of the cmdlet can also be obtained from each entry.

The output reshaping that the AWS cmdlets performed in the 1.0.x releases and that was attached to the emitted data as Note properties can now be found on the response entries recorded for each invocation in the stack (a given cmdlet invocation can hold zero or more service request and response entries). To limit memory impact the $AWSHistory buffer only keeps a record of the last 5 cmdlet executions by default and for each, the last 5 service responses (and if enabled, last 5 service requests).

These default limits can be changed using the new Set-AWSHistoryConfiguration cmdlet. It allows you to both control the size of the recording buffers and whether service requests are also logged:

Set-AWSHistoryConfiguration -MaxCmdletHistory <value> -MaxServiceCallHistory <value> -RecordServiceRequests

The -MaxCmdletHistory parameter sets the maximum number of cmdlets that can be tracked at any time. A value of 0 turns off recording of AWS cmdlet activity. The -MaxServiceCallHistory parameter sets the maximum number of service responses (and/or requests) that are tracked for each cmdlet. The -RecordServiceRequests parameter, if specified, turns on tracking of service requests for each cmdlet. All parameters are optional.

If run with no parameters, Set-AWSHistoryConfiguration simply turns off any prior request recording, leaving the current stack sizes unchanged.

To clear all entries in the current history buffer, use the new Clear-AWSHistory cmdlet.
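For example (the buffer sizes below are arbitrary choices, not defaults):

# Track the last 10 cmdlet invocations, with up to 3 service responses each,
# and also record the service requests that were sent
Set-AWSHistoryConfiguration -MaxCmdletHistory 10 -MaxServiceCallHistory 3 -RecordServiceRequests

# Turn request recording back off, leaving the current buffer sizes unchanged
Set-AWSHistoryConfiguration

# Discard everything recorded so far
Clear-AWSHistory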

Examples:

# Enumerate the details of the AWS cmdlets that are being held in the history buffer to the pipeline:
PS C:\> $AWSHistory.Commands

# Access the details of the last AWS cmdlet that was run:
PS C:\> $AWSHistory.LastCommand

# Access the details of the last service response received by the last AWS cmdlet that was run. If 
# an AWS cmdlet is paging output, it may make multiple service calls to obtain either all data or 
# the maximum amount of data (determined by parameters on the cmdlet):
PS C:\> $AWSHistory.LastServiceResponse

# Access the details of the last request made (again, a cmdlet may make more than one request if it 
# is paging on the user's behalf per note (2) above). Yields $null unless service request tracing is 
# enabled:
PS C:\> $AWSHistory.LastServiceRequest

Each entry in the $AWSHistory.Commands buffer is of type AWSCmdletHistory. Among its members are the EmittedObjectsCount property described in note (1) and the service responses (and requests, when request recording is enabled) captured for that invocation.

Note that the $AWSHistory variable is not created until an AWS cmdlet that makes a service call has been run; it evaluates to $null until that point.
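A defensive script can therefore guard its accesses before dereferencing, for example:

# $AWSHistory does not exist until the first service-calling AWS cmdlet runs,
# so check it (and its LastCommand entry) before touching members
if ($AWSHistory -ne $null -and $AWSHistory.LastCommand -ne $null)
{
    $AWSHistory.LastCommand.EmittedObjectsCount
}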

4. AWSRegion instances now use Region field instead of SystemName to allow pipelining

This allows pipelining of a collection of regions into downstream cmdlets; for example, the following command can be used to return a collection of all your Amazon EC2 AMIs across all regions:

Get-AWSRegion | Get-EC2Image -Owner self

5. Remove-S3Bucket now supports a -DeleteObjects switch option

If specified, -DeleteObjects causes the cmdlet to first delete all objects and object versions in the specified bucket before attempting to remove the bucket itself. Depending on the number of objects/object versions in the bucket, this operation can take a substantial amount of time to complete. If -DeleteObjects is not specified, Remove-S3Bucket attempts to delete the bucket and will fail if the bucket contains objects.

Note that unless -Force is specified, you will be prompted for confirmation before the cmdlet runs (see note (10) below).
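For example, to empty and remove a bucket in one step without the confirmation prompt (the bucket name 'test' is a placeholder):

# Deletes all objects and object versions first, then the bucket itself;
# -Force suppresses the confirmation prompt described in note (10)
Remove-S3Bucket -BucketName test -DeleteObjects -Force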

6. Fixed usability issue with Set-AWSCredentials

In this release, the Set-AWSCredentials cmdlet has been changed so that its only positional parameter is -StoredCredentials, allowing

Set-AWSCredentials credentialSet 

to behave the same as

Set-AWSCredentials -StoredCredentials credentialSet

In the previous (1.0.x) releases of the tools, leaving out -StoredCredentials and specifying only the profile name of a set of credentials to Set-AWSCredentials caused those credentials to be overwritten with the current default credentials. For example, if the current shell was using credentials stored in the profile named credentialSet1, then the command

Set-AWSCredentials credentialSet2

would overwrite credentialSet2 with the data for credentialSet1, instead of loading the credentials for credentialSet2 and storing them as the new shell defaults.

The workaround was to use the explicit -StoredCredentials parameter:

Set-AWSCredentials -StoredCredentials credentialSet2

7. Initialize-AWSDefaults now reports where it obtained credentials and region data from for the current shell

This provides more confidence in what credentials are in use when the shell starts. In addition, using the new $StoredAWSCredentials and $StoredAWSRegion shell variables, you can construct a prompt string that echoes the store name of the credentials in use along with the current region and path:

# yields a prompt similar to 'mycredentialsname@us-west-2 C:\users\userid\documents'
function prompt 
{
    $prompt = "";
    if ($StoredAWSCredentials -ne $null) 
    {
        $prompt = "$StoredAWSCredentials"
        if (!$prompt.EndsWith("@")) { $prompt += "@" }
    }
    else { $prompt = "PS " }
		
    if ($StoredAWSRegion -ne $null) { $prompt += "$StoredAWSRegion" }
    $prompt += " $PWD> "
    $prompt
}

8. Stop-EC2Instance now accepts Amazon.EC2.Model.Reservation instances

In addition to the existing inputs (instance ids as strings, and RunningInstance instances), Stop-EC2Instance now also accepts a Reservation instance. If supplied, all running instances in the reservation are processed for stop/terminate operations. This makes it possible to stop or terminate all of your Amazon EC2 instances in a region with a simple piped command:

Get-EC2Instance | Stop-EC2Instance

Note that when the -Terminate switch is specified, the Stop-EC2Instance cmdlet prompts for confirmation before proceeding, unless -Force is specified (see note (10) below). The existing -Force switch that was used in non-terminate mode, to force the instance(s) to stop, has been renamed to -ForceStop to avoid conflict with the standard PowerShell switch used to bypass confirmation prompts.
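For example (note that -Terminate permanently destroys the instances):

# Terminate every instance in the current region without a confirmation prompt
Get-EC2Instance | Stop-EC2Instance -Terminate -Force

# Force the instances to stop (previously -Force; now -ForceStop in
# non-terminate mode, per the renaming described above)
Get-EC2Instance | Stop-EC2Instance -ForceStop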

9. Generic List<T> parameter types replaced with array types (T[])

Cmdlets that previously declared a parameter type of generic list (e.g. List<Amazon.EC2.Model.Filter>) have been changed to instead declare the type as an array (e.g. Amazon.EC2.Model.Filter[]) to better conform to PowerShell convention. The new parameter type makes inline arrays much simpler to use, allowing for cleaner scripts and interactive use and avoiding the need to use New-Object to construct a List<T>.
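As an illustration, a plain PowerShell array literal now binds directly to such a parameter. The parameter name and instance ids below are placeholders for illustration only:

# 1.1: an inline array binds directly to a T[] parameter, with no need
# for New-Object to construct a generic List<T> first
Stop-EC2Instance -Instance @("i-11111111", "i-22222222")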

An exception to this is for Amazon SQS cmdlets that previously accepted a List collection; these have been changed to accept a hash table of string key/value pairs to make command line usage easier, for example:

New-SQSQueue -QueueName "myqueue" -Attribute @{MessageRetentionPeriod="60"; MaximumMessageSize="1024"}

10. Cmdlets that delete or terminate resources now prompt for confirmation prior to deletion

All AWS cmdlets that use the Remove verb, and the Stop-EC2Instance cmdlet when used with the -Terminate switch, now prompt for confirmation before proceeding. To bypass confirmation, use the -Force switch.

Note that AWS cmdlets do not support the -WhatIf switch.

11. Write-S3Object now supports in-line text content to upload to Amazon S3

Using the new -Content parameter (alias -Text), you can specify text-based content that should be uploaded to Amazon S3 without needing to place it into a file first. The parameter accepts simple one-line strings as well as here strings containing multiple lines. The cmdlet writes the content to a temporary file, uploads the file and then deletes the temporary file on completion.

Example usage:

# Specifying content in-line, single line text:
write-s3object mybucket -key myobject.txt -content "file content"

# Specifying content in-line, multi-line text: (note final newline needed to end in-line here-string)
write-s3object mybucket -key myobject.txt -content @" 
>> line 1 
>> line 2 
>> line 3 
>> "@ 
>> 

# Specifying content from a variable: (note final newline needed to end in-line here-string) 
$x = @" 
>> line 1 
>> line 2 
>> line 3 
>> "@ 
>> 
write-s3object mybucket -key myobject.txt -content $x

12. Read/Write-S3Object now support operations against the bucket root

(Added in release 1.1.0.1) Read-S3Object and Write-S3Object have been extended to allow the use of '/' or '\' to indicate the content being downloaded/uploaded is at the root of the bucket. For example, to upload a local folder hierarchy to a new bucket you could specify the following:

New-S3Bucket uniqueBucketName | Write-S3Object -KeyPrefix / -Folder C:\local\path -Recurse

To retrieve the entire bucket content, you would use:

Read-S3Object -KeyPrefix / -Folder C:\local\otherpath

Additionally, the two cmdlets now report on exit a summary of how many objects were uploaded or downloaded.