Fix: Unable to Locate Credentials (AWS CLI / SDK)
The Error
You run an AWS CLI command and get:
Unable to locate credentials. You can configure credentials by running "aws configure".
Or in your application using an AWS SDK:
NoCredentialProviders: no valid providers in chain
You may also see:
The security token included in the request is expired
An error occurred (ExpiredTokenException) when calling the GetCallerIdentity operation: The security token included in the request is expired
InvalidIdentityToken: Token is expired
All of these mean the same fundamental thing: the AWS SDK or CLI cannot find valid credentials to authenticate your request. Either no credentials exist, they’re misconfigured, or they’ve expired.
Why This Happens
The AWS CLI and SDKs look for credentials in a specific order called the credential provider chain. They check each source in sequence and use the first valid credentials they find. If none are found, you get “Unable to locate credentials.” If credentials are found but expired, you get the token expiry error.
The default provider chain order is:
- Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN)
- Shared credentials file (~/.aws/credentials)
- Shared config file (~/.aws/config; some SDKs only read it when AWS_SDK_LOAD_CONFIG is set)
- SSO token (from aws sso login)
- Container credentials (ECS task role via AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
- Instance profile credentials (EC2 IAM role via the instance metadata service)
If you’re getting the error, none of these sources returned valid credentials. The most common causes:
- You never ran aws configure. No credentials file exists.
- Environment variables are not set in the current shell, container, or CI environment.
- SSO session expired. You logged in with aws sso login hours ago and the token expired.
- Temporary STS credentials expired. MFA-based or assumed-role sessions have a limited lifespan (typically 1-12 hours).
- Wrong profile is active. Your credentials are under a named profile but you didn’t specify --profile or set AWS_PROFILE.
- IAM role not attached. Your EC2 instance, ECS task, or Lambda function doesn’t have an IAM role assigned.
- Docker container has no access to host credentials. Containers don’t inherit the host’s ~/.aws directory or environment variables by default.
Fix 1: Configure Credentials with aws configure
The fastest way to set up credentials on your local machine:
aws configure
This prompts for four values:
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
This creates two files:
~/.aws/credentials:
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
~/.aws/config:
[default]
region = us-east-1
output = json
Verify it worked:
aws sts get-caller-identity
You should see your account ID, user ARN, and user ID. If you still get an error, the credentials themselves may be invalid (deactivated keys, wrong keys, etc.).
Where to get access keys: Go to the AWS Console > IAM > Users > your user > Security credentials > Create access key. Use long-term access keys only for development. For production, use IAM roles.
Fix 2: Set Environment Variables
Environment variables override the credentials file and are the standard way to pass credentials in CI/CD pipelines, Docker containers, and scripts.
Linux / macOS:
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1
Windows (PowerShell):
$env:AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"
$env:AWS_SECRET_ACCESS_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
$env:AWS_DEFAULT_REGION = "us-east-1"
Windows (cmd):
set AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
set AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
set AWS_DEFAULT_REGION=us-east-1
If you’re using temporary credentials (from STS, SSO, or MFA), you also need the session token:
export AWS_SESSION_TOKEN=FwoGZXIvYXdzEBYaDH...
Without the session token, temporary credentials will fail with InvalidClientTokenId or "The security token included in the request is invalid".
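One quick way to tell whether a key pair needs a session token: temporary STS credentials have access key IDs that start with ASIA, while long-term IAM user keys start with AKIA. A tiny helper (needs_session_token is a hypothetical name) makes the check explicit:

```python
def needs_session_token(access_key_id: str) -> bool:
    """Temporary STS credentials have access key IDs starting with ASIA;
    long-term IAM user keys start with AKIA. ASIA keys are only valid
    when AWS_SESSION_TOKEN is set alongside them."""
    return access_key_id.startswith("ASIA")
```

If your AWS_ACCESS_KEY_ID starts with ASIA and AWS_SESSION_TOKEN is unset, that mismatch is almost certainly your problem.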
Verify:
aws sts get-caller-identity
Important: Environment variables are scoped to the current shell session. Opening a new terminal window means you need to set them again. For persistence, add them to your shell profile (~/.bashrc, ~/.zshrc) — but never commit credentials to version control.
Fix 3: Use Named Profiles
If you work with multiple AWS accounts, use named profiles instead of overwriting the default.
~/.aws/credentials:
[default]
aws_access_key_id = AKIA...DEFAULT
aws_secret_access_key = ...
[staging]
aws_access_key_id = AKIA...STAGING
aws_secret_access_key = ...
[production]
aws_access_key_id = AKIA...PROD
aws_secret_access_key = ...
Use a profile with the --profile flag:
aws s3 ls --profile staging
Or set it as the active profile for the entire shell session:
export AWS_PROFILE=staging
A common mistake: you configured credentials under a named profile but forgot to specify it. The CLI tries the [default] profile, finds nothing, and throws “Unable to locate credentials.”
Check which profiles you have configured:
aws configure list-profiles
Check what the CLI is currently using:
aws configure list
This shows you exactly which credential source is active and whether values are coming from the config file, environment variables, or an IAM role.
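If you'd rather inspect the files directly, profiles can also be collected programmatically. This sketch (list_profiles is a hypothetical helper, written to take file contents as strings so it is easy to test) highlights the different section naming between the two files:

```python
import configparser

def list_profiles(credentials_text: str = "", config_text: str = ""):
    """Collect profile names the way the CLI does: plain [name] sections
    in the credentials file, and [profile name] sections in the config
    file (except [default], which has no prefix in either file)."""
    profiles = set()

    creds = configparser.ConfigParser()
    creds.read_string(credentials_text)
    profiles.update(creds.sections())

    config = configparser.ConfigParser()
    config.read_string(config_text)
    for section in config.sections():
        # Strip the "profile " prefix used only in ~/.aws/config.
        profiles.add(section[len("profile "):] if section.startswith("profile ") else section)

    return sorted(profiles)
```

To run it against your real files, pass in open(os.path.expanduser("~/.aws/credentials")).read() and the same for ~/.aws/config.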
Fix 4: Refresh Expired SSO or STS Credentials
AWS SSO (IAM Identity Center)
If your organization uses AWS SSO, you authenticate with aws sso login. These sessions expire (typically after 8-12 hours).
When the session expires, you get:
The SSO session associated with this profile has expired or is otherwise invalid.
Or the generic “Unable to locate credentials” error.
Fix it by logging in again:
aws sso login --profile your-sso-profile
If you haven’t set up SSO yet:
aws configure ssoThis walks you through the setup and creates a profile in ~/.aws/config:
[profile your-sso-profile]
sso_start_url = https://your-org.awsapps.com/start
sso_region = us-east-1
sso_account_id = 123456789012
sso_role_name = AdministratorAccess
region = us-east-1
STS temporary credentials (MFA / AssumeRole)
If you use MFA or cross-account role assumption, your credentials come from STS and have an expiration time.
Get fresh credentials with MFA:
aws sts get-session-token \
--serial-number arn:aws:iam::123456789012:mfa/your-user \
--token-code 123456
This returns temporary credentials. Set them as environment variables:
export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...
For role assumption:
aws sts assume-role \
--role-arn arn:aws:iam::987654321098:role/CrossAccountRole \
--role-session-name my-session
Automate this. Manually copying STS credentials is tedious and error-prone. Use a tool like aws-vault or granted, or configure source_profile and role_arn in ~/.aws/config:
[profile cross-account]
role_arn = arn:aws:iam::987654321098:role/CrossAccountRole
source_profile = default
mfa_serial = arn:aws:iam::123456789012:mfa/your-user
Now aws s3 ls --profile cross-account automatically handles the role assumption and MFA prompt.
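If you do end up copying STS output by hand, a small script can at least generate the export lines for you. This is an illustrative helper (the sts_json_to_exports name is made up), fed the JSON that assume-role or get-session-token prints:

```python
import json

def sts_json_to_exports(sts_output: str) -> str:
    """Convert the JSON printed by `aws sts assume-role` or
    `aws sts get-session-token` into eval-able export statements.
    Both commands nest the keys under a "Credentials" object."""
    creds = json.loads(sts_output)["Credentials"]
    return "\n".join([
        f'export AWS_ACCESS_KEY_ID={creds["AccessKeyId"]}',
        f'export AWS_SECRET_ACCESS_KEY={creds["SecretAccessKey"]}',
        f'export AWS_SESSION_TOKEN={creds["SessionToken"]}',
    ])
```

Wrapped in a script (say, sts_to_env.py, a hypothetical filename), you could use it as eval "$(aws sts assume-role ... | python sts_to_env.py)".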
Fix 5: Attach an IAM Role (EC2, ECS, Lambda)
On AWS compute services, you should use IAM roles instead of hardcoded credentials. If no role is attached, the SDK can’t find credentials through the instance metadata service.
EC2
Check if an instance profile (IAM role) is attached:
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
If this returns a 404 or an empty response, no role is attached.
Attach one through the AWS Console: EC2 > Instances > select instance > Actions > Security > Modify IAM role. Or with the CLI:
aws ec2 associate-iam-instance-profile \
--instance-id i-1234567890abcdef0 \
--iam-instance-profile Name=MyInstanceProfile
After attaching, the SDK automatically picks up credentials from the metadata service. No configuration needed.
IMDSv2 note: If your instance requires IMDSv2 (recommended), the metadata request needs a token:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/
ECS
For ECS tasks, set the IAM role in your task definition under taskRoleArn:
{
  "taskRoleArn": "arn:aws:iam::123456789012:role/ecsTaskRole",
  "containerDefinitions": [...]
}
ECS injects the credentials through the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable. The SDK reads it automatically.
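What the SDK does with that variable is simple to sketch: it appends the relative URI to a fixed link-local address and fetches credentials from there. A simplified illustration (container_credentials_url is a hypothetical name, not an SDK API):

```python
import os

# ECS serves task-role credentials from a fixed link-local address;
# the SDK appends the relative URI that ECS injects into the environment.
ECS_CREDENTIALS_HOST = "http://169.254.170.2"

def container_credentials_url(env=None):
    """Return the credentials endpoint an SDK would query inside an ECS
    task, or None when the variable is absent (i.e. no task role set)."""
    env = os.environ if env is None else env
    relative_uri = env.get("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
    if relative_uri is None:
        return None
    return ECS_CREDENTIALS_HOST + relative_uri
```

Inside a container, echo $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is the quickest sanity check: if it is empty, the SDK has nothing to query.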
If your application is getting “Unable to locate credentials” inside an ECS container, verify the task role is set:
aws ecs describe-task-definition --task-definition your-task-def | grep taskRoleArn
A missing or null taskRoleArn is the problem. Don’t confuse it with executionRoleArn, which is used by the ECS agent to pull images and write logs — not by your application code.
Lambda
Lambda functions always have an execution role. If your function can’t find credentials, the issue is usually permissions on the role, not missing credentials. But verify the role exists and has the right policies:
aws lambda get-function-configuration --function-name my-function | grep Role
Fix 6: Pass Credentials into Docker Containers
Docker containers are isolated from the host. They don’t have access to your ~/.aws directory or shell environment variables unless you explicitly pass them. For Docker socket permission issues, see Fix: Docker Permission Denied.
Option 1: Mount the credentials file:
docker run -v ~/.aws:/root/.aws:ro my-app
The :ro flag makes the mount read-only, preventing the container from modifying your credentials.
Option 2: Pass environment variables:
docker run \
-e AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_DEFAULT_REGION \
my-app
When you use -e VAR_NAME without a value, Docker passes through the variable’s current value from the host shell.
Option 3: Docker Compose:
services:
  app:
    build: .
    environment:
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
      - AWS_DEFAULT_REGION
    # Or mount the credentials file:
    volumes:
      - ~/.aws:/root/.aws:ro
Option 4: Use ECS task roles if running on ECS. This is the cleanest approach — no credential management needed.
CI/CD pipelines (GitHub Actions, GitLab CI, etc.): Set AWS credentials as secrets/environment variables in your CI platform. Never hardcode them in pipeline configuration files.
GitHub Actions example:
- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::123456789012:role/GitHubActionsRole
    aws-region: us-east-1
Using OIDC federation with role-to-assume is preferred over storing access keys as GitHub secrets.
Fix 7: Use credential_process for Custom Credential Sources
If your organization uses a custom credential provider (a vault, a CLI tool, etc.), you can configure it in ~/.aws/config:
[profile custom]
credential_process = /usr/local/bin/my-credential-tool --format json
The command must output JSON in this format:
{
  "Version": 1,
  "AccessKeyId": "AKIA...",
  "SecretAccessKey": "...",
  "SessionToken": "...",
  "Expiration": "2026-03-16T12:00:00Z"
}
If the credential_process command fails or returns malformed JSON, the CLI falls through to the next provider in the chain — which may result in “Unable to locate credentials” rather than the actual error. Test the command directly to see if it works:
/usr/local/bin/my-credential-tool --format json
Edge Cases
Credential provider chain short-circuits on invalid (not missing) credentials
The chain stops at the first source that provides credentials, even if those credentials are invalid or expired. If you have expired credentials in environment variables, the SDK won’t fall through to your ~/.aws/credentials file. It stops at the environment variables and fails.
This is a common source of confusion. You know your credentials file is correct, but the CLI keeps failing because stale environment variables are taking priority.
Check for leftover environment variables:
env | grep AWS_
Unset them if they’re stale:
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
AWS_PROFILE set to a nonexistent profile
If AWS_PROFILE is set to a profile name that doesn’t exist in your config files, you get “Unable to locate credentials” instead of a more helpful “profile not found” error.
echo $AWS_PROFILE
If it’s set to something unexpected, unset it or fix it:
unset AWS_PROFILE
# or
export AWS_PROFILE=correct-profile-name
Credentials file has wrong permissions
On Linux and macOS, the credentials file should only be readable by your user (see Fix: bash Permission Denied for more on file permissions):
ls -la ~/.aws/credentials
If the permissions are too open (e.g., 644 or 777), some tools may refuse to read the file. Fix it:
chmod 600 ~/.aws/credentials
chmod 600 ~/.aws/config
Clock skew on EC2 or containers
AWS rejects requests when your system clock is more than 5 minutes out of sync. You’ll get:
Signature expired: is now earlier than / later than
Check your system time:
date -u
Compare it with actual UTC time. On EC2, ensure NTP is running:
sudo systemctl status chronyd # Amazon Linux 2
sudo systemctl status systemd-timesyncd # Ubuntu
This is especially common in Docker containers that have been running for a long time or in VMs that were suspended and resumed.
The ~/.aws/credentials file has syntax errors
A common typo is using spaces in the wrong places or missing the profile header:
# Wrong -- missing [default] header
aws_access_key_id = AKIA...
aws_secret_access_key = ...
# Wrong -- using "profile" keyword in credentials file
[profile staging]
aws_access_key_id = AKIA...
# Correct
[staging]
aws_access_key_id = AKIA...
The [profile name] syntax is only used in ~/.aws/config. In ~/.aws/credentials, it’s just [name].
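A quick way to catch this mistake programmatically: parse the credentials file and flag any section that uses the config-only [profile name] syntax. A small sketch (check_credentials_file is a hypothetical helper):

```python
import configparser

def check_credentials_file(text: str):
    """Return sections of a credentials file that wrongly use the
    config-file-only [profile name] syntax; the credential loader
    will never match these section names."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return [s for s in parser.sections() if s.startswith("profile ")]
```

Note that read_string raises configparser.MissingSectionHeaderError for the other mistake shown above (keys before any [default] header), so either error surfaces immediately.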
Still Not Working?
Enable debug logging
Run any AWS CLI command with --debug to see exactly where it’s looking for credentials and why each source fails:
aws sts get-caller-identity --debug 2>&1 | grep -i "credential\|provider\|looking"
The full debug output is verbose, but look for lines mentioning credential providers. You’ll see each provider being tried and the specific reason it was skipped.
Check if your access keys are active
Access keys can be deactivated in IAM without being deleted. Check in the AWS Console under IAM > Users > Security credentials, or:
aws iam list-access-keys --user-name your-user
Look at the Status field. If it says Inactive, activate it or create new keys.
Assume-role chain is too deep
AWS has a limit on chained role assumptions. If role A assumes role B which assumes role C, and the chain gets too long or circular, the credentials may fail silently. Simplify the trust chain.
EC2 metadata service blocked
If you’re on an EC2 instance but the metadata service is unreachable, credential retrieval fails. This can happen if:
- A firewall rule blocks 169.254.169.254
- The instance metadata service is disabled
- You’re in a Docker container on EC2 without the right network configuration
Test connectivity:
curl -s -m 2 http://169.254.169.254/latest/meta-data/
For Docker containers on EC2 that need the instance metadata service, configure the container’s network appropriately (the IMDSv2 default hop limit of 1 blocks requests that cross a container network hop) or use ECS task roles instead.
AWS SDK version is outdated
Older SDK versions may not support newer credential sources like SSO or OIDC. Update your SDK:
# AWS CLI (pip installs v1; the Homebrew formula is v2)
pip install --upgrade awscli
# or
brew upgrade awscli
# Node.js SDK
npm install @aws-sdk/client-sts@latest
# Python SDK
pip install --upgrade boto3
AWS CLI v1 has limited SSO support. If you’re using SSO, upgrade to AWS CLI v2.
Related: If environment variables aren’t loading in your application, see Fix: process.env.VARIABLE_NAME Is Undefined. For Docker build issues with missing files, see Fix: Docker COPY Failed: File Not Found. For Kubernetes connection issues after credential changes, see Fix: kubectl Connection Refused.
Related Articles
Fix: Docker Volume Permission Denied – Cannot Write to Mounted Volume
How to fix Docker permission denied errors on mounted volumes caused by UID/GID mismatch, read-only mounts, or SELinux labels.
Fix: Docker Pull Error – Image Not Found or Manifest Unknown
How to fix Docker errors like 'manifest for image not found', 'repository does not exist', or 'pull access denied' when pulling or running images.
Fix: AWS S3 Access Denied (403 Forbidden) when uploading, downloading, or listing
How to fix the 'Access Denied' (403 Forbidden) error in AWS S3 when uploading, downloading, listing, or managing objects using the CLI, SDK, or console.
Fix: E: Unable to locate package (apt-get install on Ubuntu/Debian)
How to fix the 'E: Unable to locate package' error in apt-get on Ubuntu and Debian, including apt update, missing repos, Docker images, PPA issues, and EOL releases.