Go by Example: Upload File to S3
Upload files to Amazon S3 efficiently using the AWS SDK for Go v2. This example demonstrates using the `s3/manager` package to handle multipart uploads automatically for large files.
Code
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	// Load the default AWS configuration.
	cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-east-1"))
	if err != nil {
		log.Fatalf("unable to load SDK config, %v", err)
	}

	// Create an S3 client.
	client := s3.NewFromConfig(cfg)

	// Create an uploader with the client and default options.
	uploader := manager.NewUploader(client)

	// Open the file to upload.
	file, err := os.Open("test.txt")
	if err != nil {
		log.Fatalf("failed to open file, %v", err)
	}
	defer file.Close()

	// Upload the file. The uploader switches to a multipart upload
	// automatically when the body exceeds the configured part size.
	result, err := uploader.Upload(context.TODO(), &s3.PutObjectInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("uploads/test.txt"),
		Body:   file,
	})
	if err != nil {
		log.Fatalf("failed to upload file, %v", err)
	}

	fmt.Printf("File uploaded to %s\n", result.Location)
}
```

Explanation
Uploading files to Amazon S3 is a fundamental operation for many cloud-native applications. The AWS SDK for Go v2 provides the `feature/s3/manager` package, which includes a high-level `Uploader` utility. The `Uploader` simplifies uploads by automatically splitting large files into smaller parts and uploading them in parallel, a process known as multipart upload.
Using the `Uploader` is generally preferred over the low-level `PutObject` API for most use cases. It decides whether to send a single request or start a multipart upload based on the size of the body, so your application stays efficient and reliable whether you are uploading a small configuration file or a multi-gigabyte video.
To use the uploader, first initialize a standard S3 client and pass it to `manager.NewUploader`. The `Upload` method accepts an `*s3.PutObjectInput` struct, where you specify the bucket name, object key (path), and the data body (any `io.Reader`, such as an open file). The uploader then manages the transfer and returns the location of the uploaded object on success.
- Automatic Multipart Uploads: Large files are automatically split and uploaded concurrently, improving throughput.
- Retry Logic: The SDK handles transient network errors automatically, ensuring robust transfers.
- Memory Efficiency: By streaming data from an `io.Reader`, you avoid loading the entire file into memory.

