// aws_sdk_s3/operation/put_bucket_lifecycle_configuration/builders.rs

// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
pub use crate::operation::put_bucket_lifecycle_configuration::_put_bucket_lifecycle_configuration_output::PutBucketLifecycleConfigurationOutputBuilder;

pub use crate::operation::put_bucket_lifecycle_configuration::_put_bucket_lifecycle_configuration_input::PutBucketLifecycleConfigurationInputBuilder;

impl crate::operation::put_bucket_lifecycle_configuration::builders::PutBucketLifecycleConfigurationInputBuilder {
    /// Sends a request with this input using the given client.
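    ///
    /// # Example
    ///
    /// A minimal sketch (not part of the generated code); it assumes an already-configured
    /// `client` and a prebuilt `lifecycle_config` of type `crate::types::BucketLifecycleConfiguration`:
    ///
    /// ```no_run
    /// # async fn example(
    /// #     client: &aws_sdk_s3::Client,
    /// #     lifecycle_config: aws_sdk_s3::types::BucketLifecycleConfiguration,
    /// # ) -> Result<(), aws_sdk_s3::Error> {
    /// use aws_sdk_s3::operation::put_bucket_lifecycle_configuration::builders::PutBucketLifecycleConfigurationInputBuilder;
    ///
    /// // Build the input independently of a client, then send it with one.
    /// let _output = PutBucketLifecycleConfigurationInputBuilder::default()
    ///     .bucket("amzn-s3-demo-bucket")
    ///     .lifecycle_configuration(lifecycle_config)
    ///     .send_with(client)
    ///     .await?;
    /// # Ok(())
    /// # }
    /// ```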
    pub async fn send_with(
        self,
        client: &crate::Client,
    ) -> ::std::result::Result<
        crate::operation::put_bucket_lifecycle_configuration::PutBucketLifecycleConfigurationOutput,
        ::aws_smithy_runtime_api::client::result::SdkError<
            crate::operation::put_bucket_lifecycle_configuration::PutBucketLifecycleConfigurationError,
            ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
        >,
    > {
        let mut fluent_builder = client.put_bucket_lifecycle_configuration();
        fluent_builder.inner = self;
        fluent_builder.send().await
    }
}
/// Fluent builder constructing a request to `PutBucketLifecycleConfiguration`.
///
/// <p>Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration. Keep in mind that this will overwrite an existing lifecycle configuration, so if you want to retain any configuration details, they must be included in the new lifecycle configuration. For information about lifecycle configuration, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html">Managing your storage lifecycle</a>.</p><note>
/// <p>Bucket lifecycle configuration now supports specifying a lifecycle rule using an object key name prefix, one or more object tags, object size, or any combination of these. Accordingly, this section describes the latest API. The previous version of the API supported filtering based only on an object key name prefix, which is supported for backward compatibility. For the related API description, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycle.html">PutBucketLifecycle</a>.</p>
/// </note>
/// <dl>
/// <dt>
/// Rules
/// </dt>
/// <dd>
/// <p>You specify the lifecycle configuration in your request body. The lifecycle configuration is specified as XML consisting of one or more rules. An Amazon S3 Lifecycle configuration can have up to 1,000 rules. This limit is not adjustable.</p>
/// <p>Bucket lifecycle configuration supports specifying a lifecycle rule using an object key name prefix, one or more object tags, object size, or any combination of these. Accordingly, this section describes the latest API. The previous version of the API supported filtering based only on an object key name prefix, which is supported for backward compatibility for general purpose buckets. For the related API description, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycle.html">PutBucketLifecycle</a>.</p><note>
/// <p>Lifecycle configurations for directory buckets only support expiring objects and cancelling multipart uploads. Expiration of versioned objects, transitions, and tag filters are not supported.</p>
/// </note>
/// <p>A lifecycle rule consists of the following:</p>
/// <ul>
/// <li>
/// <p>A filter identifying a subset of objects to which the rule applies. The filter can be based on a key name prefix, object tags, object size, or any combination of these.</p></li>
/// <li>
/// <p>A status indicating whether the rule is in effect.</p></li>
/// <li>
/// <p>One or more lifecycle transition and expiration actions that you want Amazon S3 to perform on the objects identified by the filter. If the state of your bucket is versioning-enabled or versioning-suspended, you can have many versions of the same object (one current version and zero or more noncurrent versions). Amazon S3 provides predefined actions that you can specify for current and noncurrent object versions.</p></li>
/// </ul>
/// <p>For more information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html">Object Lifecycle Management</a> and <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html">Lifecycle Configuration Elements</a>.</p>
/// </dd>
/// <dt>
/// Permissions
/// </dt>
/// <dd>
/// <ul>
/// <li>
/// <p><b>General purpose bucket permissions</b> - By default, all Amazon S3 resources are private, including buckets, objects, and related subresources (for example, lifecycle configuration and website configuration). Only the resource owner (that is, the Amazon Web Services account that created it) can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy. For this operation, a user must have the <code>s3:PutLifecycleConfiguration</code> permission.</p>
/// <p>You can also explicitly deny permissions. An explicit deny supersedes any other permissions. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions:</p>
/// <ul>
/// <li>
/// <p><code>s3:DeleteObject</code></p></li>
/// <li>
/// <p><code>s3:DeleteObjectVersion</code></p></li>
/// <li>
/// <p><code>s3:PutLifecycleConfiguration</code></p>
/// <p>For more information about permissions, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html">Managing Access Permissions to Your Amazon S3 Resources</a>.</p></li>
/// </ul></li>
/// </ul>
/// <ul>
/// <li>
/// <p><b>Directory bucket permissions</b> - You must have the <code>s3express:PutLifecycleConfiguration</code> permission in an IAM identity-based policy to use this operation. Cross-account access to this API operation isn't supported. The resource owner can optionally grant access permissions to others by creating a role or user for them as long as they are within the same account as the owner and resource.</p>
/// <p>For more information about directory bucket policies and permissions, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html">Authorizing Regional endpoint APIs with IAM</a> in the <i>Amazon S3 User Guide</i>.</p><note>
/// <p><b>Directory buckets </b> - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format <code>https://s3express-control.<i>region-code</i>.amazonaws.com/<i>bucket-name</i></code>. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/endpoint-directory-buckets-AZ.html">Regional and Zonal endpoints for directory buckets in Availability Zones</a> in the <i>Amazon S3 User Guide</i>. For more information about endpoints in Local Zones, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-lzs-for-directory-buckets.html">Concepts for directory buckets in Local Zones</a> in the <i>Amazon S3 User Guide</i>.</p>
/// </note></li>
/// </ul>
/// </dd>
/// <dt>
/// HTTP Host header syntax
/// </dt>
/// <dd>
/// <p><b>Directory buckets </b> - The HTTP Host header syntax is <code>s3express-control.<i>region</i>.amazonaws.com</code>.</p>
/// <p>The following operations are related to <code>PutBucketLifecycleConfiguration</code>:</p>
/// <ul>
/// <li>
/// <p><a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycleConfiguration.html">GetBucketLifecycleConfiguration</a></p></li>
/// <li>
/// <p><a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html">DeleteBucketLifecycle</a></p></li>
/// </ul>
/// </dd>
/// </dl>
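///
/// # Example
///
/// A minimal sketch of driving this fluent builder (not part of the generated code; it
/// assumes an already-configured `client` and a prebuilt `lifecycle_config`):
///
/// ```no_run
/// # async fn example(
/// #     client: &aws_sdk_s3::Client,
/// #     lifecycle_config: aws_sdk_s3::types::BucketLifecycleConfiguration,
/// # ) -> Result<(), aws_sdk_s3::Error> {
/// let _output = client
///     .put_bucket_lifecycle_configuration()
///     .bucket("amzn-s3-demo-bucket")
///     .lifecycle_configuration(lifecycle_config)
///     .send()
///     .await?;
/// # Ok(())
/// # }
/// ```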
#[derive(::std::clone::Clone, ::std::fmt::Debug)]
pub struct PutBucketLifecycleConfigurationFluentBuilder {
    handle: ::std::sync::Arc<crate::client::Handle>,
    inner: crate::operation::put_bucket_lifecycle_configuration::builders::PutBucketLifecycleConfigurationInputBuilder,
    config_override: ::std::option::Option<crate::config::Builder>,
}
impl
    crate::client::customize::internal::CustomizableSend<
        crate::operation::put_bucket_lifecycle_configuration::PutBucketLifecycleConfigurationOutput,
        crate::operation::put_bucket_lifecycle_configuration::PutBucketLifecycleConfigurationError,
    > for PutBucketLifecycleConfigurationFluentBuilder
{
    fn send(
        self,
        config_override: crate::config::Builder,
    ) -> crate::client::customize::internal::BoxFuture<
        crate::client::customize::internal::SendResult<
            crate::operation::put_bucket_lifecycle_configuration::PutBucketLifecycleConfigurationOutput,
            crate::operation::put_bucket_lifecycle_configuration::PutBucketLifecycleConfigurationError,
        >,
    > {
        ::std::boxed::Box::pin(async move { self.config_override(config_override).send().await })
    }
}
impl PutBucketLifecycleConfigurationFluentBuilder {
    /// Creates a new `PutBucketLifecycleConfigurationFluentBuilder`.
    pub(crate) fn new(handle: ::std::sync::Arc<crate::client::Handle>) -> Self {
        Self {
            handle,
            inner: ::std::default::Default::default(),
            config_override: ::std::option::Option::None,
        }
    }
    /// Access the `PutBucketLifecycleConfiguration` input builder as a reference.
    pub fn as_input(&self) -> &crate::operation::put_bucket_lifecycle_configuration::builders::PutBucketLifecycleConfigurationInputBuilder {
        &self.inner
    }
    /// Sends the request and returns the response.
    ///
    /// If an error occurs, an `SdkError` will be returned with additional details that
    /// can be matched against.
    ///
    /// By default, any retryable failures will be retried twice. Retry behavior
    /// is configurable with the [RetryConfig](aws_smithy_types::retry::RetryConfig), which can be
    /// set when configuring the client.
    pub async fn send(
        self,
    ) -> ::std::result::Result<
        crate::operation::put_bucket_lifecycle_configuration::PutBucketLifecycleConfigurationOutput,
        ::aws_smithy_runtime_api::client::result::SdkError<
            crate::operation::put_bucket_lifecycle_configuration::PutBucketLifecycleConfigurationError,
            ::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
        >,
    > {
        let input = self
            .inner
            .build()
            .map_err(::aws_smithy_runtime_api::client::result::SdkError::construction_failure)?;
        let runtime_plugins = crate::operation::put_bucket_lifecycle_configuration::PutBucketLifecycleConfiguration::operation_runtime_plugins(
            self.handle.runtime_plugins.clone(),
            &self.handle.conf,
            self.config_override,
        );
        crate::operation::put_bucket_lifecycle_configuration::PutBucketLifecycleConfiguration::orchestrate(&runtime_plugins, input).await
    }

    /// Consumes this builder, creating a customizable operation that can be modified before being sent.
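    ///
    /// # Example
    ///
    /// A minimal sketch (not part of the generated code); it adds a custom header via
    /// `mutate_request` before sending, assuming an already-configured `client` and a
    /// prebuilt `lifecycle_config`:
    ///
    /// ```no_run
    /// # async fn example(
    /// #     client: &aws_sdk_s3::Client,
    /// #     lifecycle_config: aws_sdk_s3::types::BucketLifecycleConfiguration,
    /// # ) -> Result<(), aws_sdk_s3::Error> {
    /// let _output = client
    ///     .put_bucket_lifecycle_configuration()
    ///     .bucket("amzn-s3-demo-bucket")
    ///     .lifecycle_configuration(lifecycle_config)
    ///     .customize()
    ///     .mutate_request(|req| {
    ///         // Tweak the HTTP request before it is dispatched.
    ///         req.headers_mut().insert("x-example-header", "example-value");
    ///     })
    ///     .send()
    ///     .await?;
    /// # Ok(())
    /// # }
    /// ```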
    pub fn customize(
        self,
    ) -> crate::client::customize::CustomizableOperation<
        crate::operation::put_bucket_lifecycle_configuration::PutBucketLifecycleConfigurationOutput,
        crate::operation::put_bucket_lifecycle_configuration::PutBucketLifecycleConfigurationError,
        Self,
    > {
        crate::client::customize::CustomizableOperation::new(self)
    }
    pub(crate) fn config_override(mut self, config_override: impl ::std::convert::Into<crate::config::Builder>) -> Self {
        self.set_config_override(::std::option::Option::Some(config_override.into()));
        self
    }

    pub(crate) fn set_config_override(&mut self, config_override: ::std::option::Option<crate::config::Builder>) -> &mut Self {
        self.config_override = config_override;
        self
    }
    /// <p>The name of the bucket for which to set the configuration.</p>
    pub fn bucket(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
        self.inner = self.inner.bucket(input.into());
        self
    }
    /// <p>The name of the bucket for which to set the configuration.</p>
    pub fn set_bucket(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
        self.inner = self.inner.set_bucket(input);
        self
    }
    /// <p>The name of the bucket for which to set the configuration.</p>
    pub fn get_bucket(&self) -> &::std::option::Option<::std::string::String> {
        self.inner.get_bucket()
    }
    /// <p>Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding <code>x-amz-checksum</code> or <code>x-amz-trailer</code> header sent. Otherwise, Amazon S3 fails the request with the HTTP status code <code>400 Bad Request</code>. For more information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html">Checking object integrity</a> in the <i>Amazon S3 User Guide</i>.</p>
    /// <p>If you provide an individual checksum, Amazon S3 ignores any provided <code>ChecksumAlgorithm</code> parameter.</p>
    pub fn checksum_algorithm(mut self, input: crate::types::ChecksumAlgorithm) -> Self {
        self.inner = self.inner.checksum_algorithm(input);
        self
    }
    /// <p>Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding <code>x-amz-checksum</code> or <code>x-amz-trailer</code> header sent. Otherwise, Amazon S3 fails the request with the HTTP status code <code>400 Bad Request</code>. For more information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html">Checking object integrity</a> in the <i>Amazon S3 User Guide</i>.</p>
    /// <p>If you provide an individual checksum, Amazon S3 ignores any provided <code>ChecksumAlgorithm</code> parameter.</p>
    pub fn set_checksum_algorithm(mut self, input: ::std::option::Option<crate::types::ChecksumAlgorithm>) -> Self {
        self.inner = self.inner.set_checksum_algorithm(input);
        self
    }
    /// <p>Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding <code>x-amz-checksum</code> or <code>x-amz-trailer</code> header sent. Otherwise, Amazon S3 fails the request with the HTTP status code <code>400 Bad Request</code>. For more information, see <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html">Checking object integrity</a> in the <i>Amazon S3 User Guide</i>.</p>
    /// <p>If you provide an individual checksum, Amazon S3 ignores any provided <code>ChecksumAlgorithm</code> parameter.</p>
    pub fn get_checksum_algorithm(&self) -> &::std::option::Option<crate::types::ChecksumAlgorithm> {
        self.inner.get_checksum_algorithm()
    }
    /// <p>Container for lifecycle rules. You can add as many as 1,000 rules.</p>
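    ///
    /// # Example
    ///
    /// A minimal sketch of constructing a configuration with one rule (not part of the
    /// generated code; builder and variant names reflect recent SDK versions and should be
    /// checked against the `crate::types` you build with):
    ///
    /// ```no_run
    /// use aws_sdk_s3::types::{
    ///     BucketLifecycleConfiguration, ExpirationStatus, LifecycleExpiration, LifecycleRule,
    ///     LifecycleRuleFilter,
    /// };
    ///
    /// # fn example() -> Result<BucketLifecycleConfiguration, aws_sdk_s3::error::BuildError> {
    /// // Expire objects under the "logs/" prefix after 30 days.
    /// let rule = LifecycleRule::builder()
    ///     .id("expire-logs")
    ///     .status(ExpirationStatus::Enabled)
    ///     .filter(LifecycleRuleFilter::builder().prefix("logs/").build())
    ///     .expiration(LifecycleExpiration::builder().days(30).build())
    ///     .build()?;
    /// let config = BucketLifecycleConfiguration::builder().rules(rule).build()?;
    /// # Ok(config)
    /// # }
    /// ```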
    pub fn lifecycle_configuration(mut self, input: crate::types::BucketLifecycleConfiguration) -> Self {
        self.inner = self.inner.lifecycle_configuration(input);
        self
    }
    /// <p>Container for lifecycle rules. You can add as many as 1,000 rules.</p>
    pub fn set_lifecycle_configuration(mut self, input: ::std::option::Option<crate::types::BucketLifecycleConfiguration>) -> Self {
        self.inner = self.inner.set_lifecycle_configuration(input);
        self
    }
    /// <p>Container for lifecycle rules. You can add as many as 1,000 rules.</p>
    pub fn get_lifecycle_configuration(&self) -> &::std::option::Option<crate::types::BucketLifecycleConfiguration> {
        self.inner.get_lifecycle_configuration()
    }
    /// <p>The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code <code>403 Forbidden</code> (access denied).</p><note>
    /// <p>This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.</p>
    /// </note>
    pub fn expected_bucket_owner(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
        self.inner = self.inner.expected_bucket_owner(input.into());
        self
    }
    /// <p>The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code <code>403 Forbidden</code> (access denied).</p><note>
    /// <p>This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.</p>
    /// </note>
    pub fn set_expected_bucket_owner(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
        self.inner = self.inner.set_expected_bucket_owner(input);
        self
    }
    /// <p>The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code <code>403 Forbidden</code> (access denied).</p><note>
    /// <p>This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.</p>
    /// </note>
    pub fn get_expected_bucket_owner(&self) -> &::std::option::Option<::std::string::String> {
        self.inner.get_expected_bucket_owner()
    }
    /// <p>Indicates which default minimum object size behavior is applied to the lifecycle configuration.</p><note>
    /// <p>This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.</p>
    /// </note>
    /// <ul>
    /// <li>
    /// <p><code>all_storage_classes_128K</code> - Objects smaller than 128 KB will not transition to any storage class by default.</p></li>
    /// <li>
    /// <p><code>varies_by_storage_class</code> - Objects smaller than 128 KB will transition to Glacier Flexible Retrieval or Glacier Deep Archive storage classes. By default, all other storage classes will prevent transitions smaller than 128 KB.</p></li>
    /// </ul>
    /// <p>To customize the minimum object size for any transition you can add a filter that specifies a custom <code>ObjectSizeGreaterThan</code> or <code>ObjectSizeLessThan</code> in the body of your transition rule. Custom filters always take precedence over the default transition behavior.</p>
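    ///
    /// # Example
    ///
    /// A minimal sketch (not part of the generated code; the variant name reflects recent
    /// SDK versions and assumes an already-configured `client` and a prebuilt `lifecycle_config`):
    ///
    /// ```no_run
    /// # async fn example(
    /// #     client: &aws_sdk_s3::Client,
    /// #     lifecycle_config: aws_sdk_s3::types::BucketLifecycleConfiguration,
    /// # ) -> Result<(), aws_sdk_s3::Error> {
    /// use aws_sdk_s3::types::TransitionDefaultMinimumObjectSize;
    ///
    /// let _output = client
    ///     .put_bucket_lifecycle_configuration()
    ///     .bucket("amzn-s3-demo-bucket")
    ///     .lifecycle_configuration(lifecycle_config)
    ///     .transition_default_minimum_object_size(TransitionDefaultMinimumObjectSize::AllStorageClasses128K)
    ///     .send()
    ///     .await?;
    /// # Ok(())
    /// # }
    /// ```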
    pub fn transition_default_minimum_object_size(mut self, input: crate::types::TransitionDefaultMinimumObjectSize) -> Self {
        self.inner = self.inner.transition_default_minimum_object_size(input);
        self
    }
    /// <p>Indicates which default minimum object size behavior is applied to the lifecycle configuration.</p><note>
    /// <p>This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.</p>
    /// </note>
    /// <ul>
    /// <li>
    /// <p><code>all_storage_classes_128K</code> - Objects smaller than 128 KB will not transition to any storage class by default.</p></li>
    /// <li>
    /// <p><code>varies_by_storage_class</code> - Objects smaller than 128 KB will transition to Glacier Flexible Retrieval or Glacier Deep Archive storage classes. By default, all other storage classes will prevent transitions smaller than 128 KB.</p></li>
    /// </ul>
    /// <p>To customize the minimum object size for any transition you can add a filter that specifies a custom <code>ObjectSizeGreaterThan</code> or <code>ObjectSizeLessThan</code> in the body of your transition rule. Custom filters always take precedence over the default transition behavior.</p>
    pub fn set_transition_default_minimum_object_size(
        mut self,
        input: ::std::option::Option<crate::types::TransitionDefaultMinimumObjectSize>,
    ) -> Self {
        self.inner = self.inner.set_transition_default_minimum_object_size(input);
        self
    }
    /// <p>Indicates which default minimum object size behavior is applied to the lifecycle configuration.</p><note>
    /// <p>This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.</p>
    /// </note>
    /// <ul>
    /// <li>
    /// <p><code>all_storage_classes_128K</code> - Objects smaller than 128 KB will not transition to any storage class by default.</p></li>
    /// <li>
    /// <p><code>varies_by_storage_class</code> - Objects smaller than 128 KB will transition to Glacier Flexible Retrieval or Glacier Deep Archive storage classes. By default, all other storage classes will prevent transitions smaller than 128 KB.</p></li>
    /// </ul>
    /// <p>To customize the minimum object size for any transition you can add a filter that specifies a custom <code>ObjectSizeGreaterThan</code> or <code>ObjectSizeLessThan</code> in the body of your transition rule. Custom filters always take precedence over the default transition behavior.</p>
    pub fn get_transition_default_minimum_object_size(&self) -> &::std::option::Option<crate::types::TransitionDefaultMinimumObjectSize> {
        self.inner.get_transition_default_minimum_object_size()
    }
}