
Your Locked-Down Corporate Laptop Just Became Irrelevant: An AWS CloudShell Field Guide for DevSecOps Engineers

Let me paint you a picture that will be painfully familiar.

It’s 8:47 AM. You’ve just inherited a brand-new corporate Windows laptop, still warm from the imaging oven. You open a terminal and try to install Terraform. Blocked. You try WSL. Blocked. Docker Desktop? Blocked. You consider asking the service desk for admin rights and then remember you’ve been down that road before: three approval layers, a risk assessment, and six weeks of waiting, only to be told that binary installs constitute a “security risk.”

Meanwhile, production is down and your platform team needs you to run a terraform plan right now.

I’ve been there. Twenty-five years in this industry, and if there’s one constant across every enterprise engagement, it’s the gap between what the security team will permit on a laptop and what a DevSecOps engineer actually needs to do their job.

Here’s what most engineers in that position don’t know: AWS CloudShell has already solved this. And as of 2024–2025, it has grown from a convenient curiosity into a genuinely viable daily driver for platform engineers working against locked-down corporate environments.

This is the field guide I wish I’d had. No fluff, no hand-waving. Just what CloudShell actually is, what it can do, how to get the most out of it, and where it will bite you if you’re not prepared.


What AWS CloudShell Actually Is

Strip away the marketing and CloudShell is this: a free, browser-launched Amazon Linux 2023 container that inherits your AWS console IAM identity, runs inside AWS’s network, and persists one gigabyte of your home directory across sessions.

That single sentence contains four things that matter enormously when you’re working from a restricted endpoint:

It’s browser-launched. The only thing your corporate laptop does is render a WebSocket connection to ssmmessages.<region>.amazonaws.com, the same endpoint the Systems Manager plugin uses. If your corporate proxy allows the AWS console (and it almost certainly does), it allows CloudShell. Your IT team’s binary controls, Windows Defender policies, and USB restrictions are all entirely irrelevant. The toolchain runs inside AWS, not on your machine.

It inherits your IAM identity. The moment you open CloudShell, you have your console credentials. No aws configure, no access keys stored anywhere, no credential file on disk. You are authenticated to AWS with whatever permissions your IAM principal carries, and those credentials refresh automatically throughout your session.

It’s free. CloudShell itself costs nothing. You pay only for the downstream AWS resources you touch (the S3 bucket, the EKS cluster, the RDS instance), not for the shell environment itself.

One gigabyte of home directory persists. This is the crux of everything that follows. $HOME (/home/cloudshell-user) survives across sessions, across browser closes, across idle timeouts. Your binaries, your dotfiles, your scripts, your git configuration: all of it lives there and comes back on your next session. Everything outside $HOME (the operating system, /tmp, packages you install with dnf) is ephemeral and gets wiped the moment the session ends. Once you understand that distinction, everything about working effectively in CloudShell follows from it.


The Compute Reality

CloudShell gives you 1 vCPU and 2 GiB of RAM on Amazon Linux 2023, with Docker pre-installed and working without sudo. Three shells are available: bash (default), zsh, and PowerShell on .NET Core. You have full sudo access, though its value is limited since anything you install with dnf disappears when the session ends.

The session limits are the numbers you need to commit to memory:

  • 20–30 minutes of idle timeout: if there’s no keyboard or pointer activity, the session ends and any running processes are killed. Not paused. Killed.
  • ~12 hour hard cap: even an actively-used session has a maximum lifetime of approximately 12 hours.
  • Up to 10 concurrent tabs per region, all sharing the same VM and the same $HOME.
  • One independent VM per region: switching regions in the console spins up a separate environment with its own separate home directory.
  • 120-day deletion timer: if you stop using a particular region’s CloudShell entirely, AWS deletes that region’s home directory 120 days after your last session. You’ll get a Health Dashboard notice first, and the timer resets the moment you log back in.

The session limits are not insurmountable (I’ll cover tmux and CodeBuild patterns for working around them), but they represent the single most important design constraint for anyone using CloudShell as a daily driver. If a task might run longer than 12 hours unattended, CloudShell is the wrong place for it.


What Ships Out of the Box

The pre-installed toolchain is more comprehensive than most engineers expect. AWS keeps the image on Amazon Linux 2023 and updates it continuously, so version numbers drift, but the following are consistently present:

AWS tooling is the strongest category: AWS CLI v2, AWS SAM CLI, AWS CDK Toolkit, Elastic Beanstalk CLI, Amazon ECS CLI, AWS Tools for PowerShell, boto3, and, critically, the SSM Session Manager plugin. That last one matters more than most people realise; I’ll come back to it.

Kubernetes gives you kubectl pre-installed. Nothing else. helm, eksctl, k9s, kubectx: all absent, all needing manual installation into ~/bin.

Runtimes include Python 3 with pip, Node.js with npm, and Amazon Corretto 21 (OpenJDK). Go, Ruby, Rust, and PHP are not pre-installed.

Shell utilities include git, vim, nano, tmux, jq, make, tar, zip, wget, curl, ssh client, psql (PostgreSQL client), bash-completion, procps, and man pages.

Docker is now fully pre-installed with a working daemon, no sudo required. This changed in January 2024, when AWS added Docker support across 13 regions, and expanded it to all commercial regions by September 2024. CDK image assets, container builds, and ECR pushes now work natively in CloudShell.

The most conspicuous absence on that list is Terraform. HashiCorp’s licensing change means Terraform isn’t in the Amazon Linux repos by default. Every CloudShell user needs to install it manually, which brings us to the most important operational pattern.


The Only Rule That Matters: Everything Lives in $HOME

If you take nothing else from this guide, take this: only $HOME survives across sessions. Everything you want to persist must live under ~/bin, ~/.local/bin, or ~/.npm-global/bin. The operating system is ephemeral. /usr/bin is ephemeral. Anything you sudo dnf install is gone the next time you open a tab.

Amazon Linux 2023’s default .bash_profile already adds ~/bin and ~/.local/bin to PATH, so dropping a binary into ~/bin and making it executable is the entire install pattern:

bash

# Install Terraform: resolve latest version via HashiCorp's checkpoint API
TF_VERSION=$(curl -fsSL https://checkpoint-api.hashicorp.com/v1/check/terraform \
  | python3 -c "import sys,json; print(json.load(sys.stdin)['current_version'])")

curl -fsSL "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip" \
  -o /tmp/terraform.zip

unzip -oq /tmp/terraform.zip -d ~/bin/
chmod +x ~/bin/terraform

# Strip debug symbols: Go binaries carry them by default,
# and stripping saves 30-50% on disk (Terraform goes from ~110 MB to ~65 MB)
strip ~/bin/terraform

terraform version

The same pattern applies to Terragrunt, helm, eksctl, k9s, kubectx, kubens, sops, tflint, yq, and infracost. One thing to be aware of: some tools ship as tarballs rather than raw binaries. Infracost is the most common gotcha. infracost-linux-amd64.tar.gz contains a binary named infracost-linux-amd64 inside it, so you need to extract to /tmp first:

bash

curl -fsSL \
  "https://github.com/infracost/infracost/releases/download/v0.10.44/infracost-linux-amd64.tar.gz" \
  | tar -xz -C /tmp/
mv /tmp/infracost-linux-amd64 ~/bin/infracost
chmod +x ~/bin/infracost && strip ~/bin/infracost

Python tools persist cleanly via pip3 install --user --no-cache-dir, which installs into ~/.local/bin, a path that survives. The --no-cache-dir flag is non-negotiable: without it, pip writes its wheel cache to ~/.cache/pip, silently eating into the same 1 GB quota your binaries depend on, while build scratch still lands in /tmp on the ephemeral rootfs, which has limits of its own.

Node.js tools need a one-time prefix redirect to $HOME before they’ll persist:

bash

npm config set prefix ~/.npm-global
export PATH="$HOME/.npm-global/bin:$PATH"
# Add the export to ~/.bashrc so it survives
npm install -g cdktf-cli

The bottom line is that managing a CloudShell home directory is a discipline, not an afterthought. I’ve published a bootstrap.sh script that handles all of this idempotently: it pins versions, strips binaries, checks available space before and after each install, and configures the shell environment. Link at the end.


Networking: What Goes Where

Default CloudShell sits in an AWS-managed network with full outbound internet and zero inbound access. No public IP, no inbound ports, no way to SSH into the environment from outside. Traffic from your browser is a WebSocket to the SSM Messages endpoint. That’s it. From inside CloudShell, you can reach the public internet freely and all public AWS endpoints directly.

What default CloudShell cannot reach is anything inside your VPC: private RDS endpoints, private EKS API servers, internal ALBs, anything with an RFC 1918 address (10.x, 172.16–31.x, or 192.168.x). Two patterns solve this.

Pattern 1: SSM Session Manager Port Forwarding

This is what I reach for in 90% of situations where I need private resource access. Any EC2 instance running the SSM agent with the AmazonSSMManagedInstanceCore policy becomes a pivot point for tunnelling traffic to any endpoint that EC2 instance can reach, without SSH, without a key pair, without a security group rule allowing inbound from anywhere.

From CloudShell:

bash

# Forward local port 15432 to a private RDS instance via an EC2 pivot
aws ssm start-session \
  --target i-0abc123def456 \
  --document-name AWS-StartPortForwardingSessionToRemoteHost \
  --parameters '{
    "host":["prod-db.cluster-xxxxx.eu-west-2.rds.amazonaws.com"],
    "portNumber":["5432"],
    "localPortNumber":["15432"]
  }'

Open a second tab in CloudShell and connect:

bash

psql "host=localhost port=15432 dbname=mydb user=appuser"

Combined with RDS IAM authentication, you eliminate the password entirely:

bash

# Generate a short-lived auth token from your IAM identity
TOKEN=$(aws rds generate-db-auth-token \
  --hostname prod-db.cluster-xxxxx.eu-west-2.rds.amazonaws.com \
  --port 5432 \
  --username appuser \
  --region eu-west-2)

# host= names the real RDS endpoint so TLS certificate verification passes;
# hostaddr= forces the actual connection through the local tunnel
PGPASSWORD="$TOKEN" psql \
  "host=prod-db.cluster-xxxxx.eu-west-2.rds.amazonaws.com hostaddr=127.0.0.1 port=15432 dbname=mydb user=appuser sslmode=verify-full"

IAM identity for authentication, SSM for the tunnel, no credentials stored anywhere: this is the production pattern I’d want in any enterprise environment. It also eliminates the bastion host entirely. No bastion fleet to patch, no SSH keys to rotate, no security group rules exposing port 22 to a CIDR range that’s gradually drifted to 0.0.0.0/0.

Pattern 2: VPC-Mode CloudShell

Launched in June 2024 and now available in all commercial regions, VPC-mode CloudShell places the shell directly inside your VPC. Your session’s network interface has an IP address in your chosen subnet and is governed by security groups you control. Private endpoints are natively reachable without any pivot host.

The tradeoffs are significant and worth understanding before you reach for it:

  • No persistent storage. VPC-mode environments have no persistent $HOME. The home directory is destroyed when the session ends. Every tool you installed is gone.
  • No file upload or download. The Actions menu’s upload/download functionality is disabled in VPC mode.
  • No internet unless you route it. Placing CloudShell in a public subnet does not give it internet access; VPC-mode environments are never assigned a public IP. You need a private subnet routed through a NAT Gateway.
  • Maximum 2 VPC environments per IAM principal.

My practical recommendation: use SSM port forwarding from the default shell for most private resource access. It requires one EC2 instance running SSM agent (which you almost certainly have already) and avoids the loss of persistent storage. Reserve VPC-mode for workflows where you genuinely need to be in the VPC, for example, accessing services that don’t expose a reachable endpoint outside the VPC at all, or operating tools that make many rapid connections to private addresses where the overhead of tunnelling each one would be prohibitive.


Working Terraform From CloudShell

With Terraform installed into ~/bin and your state backend in S3 with a DynamoDB lock table, a Terraform workflow from CloudShell is functionally identical to working from a local machine, with one significant advantage: authentication is automatic. There’s no aws configure, no profile switching, no AWS_PROFILE export. You open CloudShell, and Terraform’s AWS provider picks up your inherited credentials immediately.

bash

# Clone your repo (HTTPS + PAT, or CodeCommit with the pre-installed git-remote-codecommit)
git clone https://github.com/your-org/infrastructure.git
cd infrastructure/environments/production

# Init against your remote backend (credentials auto-inherited)
terraform init

# Plan and apply
terraform plan -out=tfplan
terraform apply tfplan

One gotcha to be aware of: if your Terraform configuration uses assume_role in the provider block, the assumed role session defaults to a 1-hour maximum. For long applies, set an explicit duration:

hcl

provider "aws" {
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/InfraDeployRole"
    duration = "4h"  # requires MaxSessionDuration ≥ 14400 on the role
  }
}

For applies you expect to exceed the 12-hour hard session limit (large RDS migrations, cross-account blue/green cutovers): don’t run them in CloudShell at all. Trigger them from CodeBuild and monitor from CloudShell:

bash

aws codebuild start-build --project-name terraform-apply
aws logs tail /aws/codebuild/terraform-apply --follow --format short

Terragrunt works identically, and the pre-installed git-remote-codecommit helper means CodeCommit repositories work with terragrunt run-all without any additional credential configuration.


EKS and Kubernetes Workflows

kubectl is pre-installed, and configuring it for an EKS cluster is a single command:

bash

aws eks update-kubeconfig --name my-cluster --region eu-west-2

From that point, every kubectl command works with your IAM identity. The prerequisite is that your IAM principal is mapped to cluster RBAC permissions. The modern way to do this is via EKS Access Entries (GA 2023), which replaces the older aws-auth ConfigMap pattern:

bash

# Grant your IAM role cluster-admin access via EKS Access Entries
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/PlatformEngineer \
  --type STANDARD

aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/PlatformEngineer \
  --access-scope type=cluster \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy

If your EKS API endpoint is private-only (as it should be in any serious production environment), you need VPC-mode CloudShell in the cluster’s VPC, or SSM port forwarding through an EC2 instance that can reach the API endpoint.

For interactive cluster work, install k9s into ~/bin; it runs cleanly in CloudShell’s terminal and gives you the full TUI cluster browser. For scripted bulk operations, the pre-installed AWS CLI’s eks subcommands cover cluster management, and kubectl handles everything at the workload layer.


CI/CD Debugging and Observability

This is one of the areas where CloudShell genuinely exceeds what a local machine can offer, because the credentials are already in place and the AWS CLI tooling for log access is right there.

For Lambda function debugging:

bash

# Tail Lambda logs in real time
aws logs tail /aws/lambda/my-function --follow --filter-pattern ERROR

# Invoke and see the response immediately
aws lambda invoke \
  --function-name my-function \
  --payload '{"key":"value"}' \
  --cli-binary-format raw-in-base64-out \
  /dev/stdout

For CodeBuild pipeline failures:

bash

# Get the last failed build's log
aws logs tail /aws/codebuild/my-project --follow --since 30m --format short

# Start an interactive debugging session inside a running CodeBuild build
# (the buildspec must pause on codebuild-breakpoint for this to work)
BUILD_ID=$(aws codebuild list-builds-for-project --project-name my-project \
  --query 'ids[0]' --output text)
TARGET=$(aws codebuild batch-get-builds --ids "$BUILD_ID" \
  --query 'builds[0].debugSession.sessionTarget' --output text)
aws ssm start-session --target "$TARGET"

For CodePipeline state:

bash

aws codepipeline get-pipeline-state --name my-pipeline \
  --query 'stageStates[*].{Stage:stageName,Status:latestExecution.status}' \
  --output table

AWS also ships a CloudWatch Live Tail feature that provides real-time streaming log access with JSON-aware filtering. The command aws logs start-live-tail --log-group-identifiers arn:aws:logs:... runs natively in CloudShell.


Secrets and Configuration Management

CloudShell’s IAM inheritance makes it an ideal environment for secrets work, precisely because nothing is stored on the laptop.

For ad-hoc secret retrieval:

bash

# Retrieve and pretty-print a secret
aws secretsmanager get-secret-value \
  --secret-id prod/myapp/db \
  --query SecretString \
  --output text | jq

# Bulk-retrieve SSM Parameter Store hierarchy
aws ssm get-parameters-by-path \
  --path /prod/myapp/ \
  --recursive \
  --with-decryption \
  --query 'Parameters[*].{Name:Name,Value:Value}' \
  --output table

For encrypted file management with SOPS and AWS KMS, CloudShell is close to the ideal workflow. SOPS needs an AWS identity to call KMS for encryption and decryption. CloudShell has one, automatically, without a credential file anywhere on disk:

bash

# Install sops into ~/bin (persists)
SOPS_VERSION=$(curl -fsSL https://api.github.com/repos/getsops/sops/releases/latest \
  | grep '"tag_name"' | sed -E 's/.*"([^"]+)".*/\1/')
curl -fsSL \
  "https://github.com/getsops/sops/releases/download/${SOPS_VERSION}/sops-${SOPS_VERSION}.linux.amd64" \
  -o ~/bin/sops && chmod +x ~/bin/sops

# Edit an encrypted file: sops decrypts with KMS, opens vim, re-encrypts on save
sops secrets/prod.yaml

# Use with Terraform: decrypt via process substitution, never write plaintext
# to disk (and stdin stays free for the apply approval prompt)
terraform apply -var-file=<(sops -d secrets/prod.tfvars.enc.yaml)

SOPS with KMS backend, operated from a CloudShell session: this gives you encrypted-at-rest secrets, a KMS audit trail for every decryption event, and zero credential files on any corporate endpoint. It’s how I’d want secrets managed in any regulated environment.


The Locked-Down Laptop Replacement Map

Let me make the replacement argument explicit. Here is the one-to-one substitution table for engineers working from restricted corporate endpoints:

| Blocked on laptop | CloudShell replacement |
|---|---|
| Terraform binary | curl binary into ~/bin; persists across sessions |
| kubectl | Pre-installed; aws eks update-kubeconfig for cluster access |
| SSH client | aws ssm start-session --target i-xxx; no keys, no inbound rules |
| Docker Desktop | Docker daemon pre-installed; heavy builds via CodeBuild |
| Git CLI | Pre-installed; HTTPS + PAT for GitHub, or CodeCommit native |
| Python / pip | Python 3 pre-installed; pip install --user persists to ~/.local |
| AWS CLI | AWS CLI v2 pre-installed and auto-updated |
| Local credential store | Not needed; IAM identity inherited from console login |
| Text editor | vim and nano pre-installed; or Actions → Edit file |
| Port forwarding | aws ssm start-session with AWS-StartPortForwardingSessionToRemoteHost |

The mental model shift required is this: the laptop is no longer the workstation. The browser is the terminal. The workstation is inside AWS. Once you internalise that, the locked-down laptop stops being a constraint and starts being irrelevant.


CloudShell vs Cloud9 vs EC2 Dev Box

This comparison has simplified significantly since mid-2024. AWS Cloud9 closed to new customers on 25 July 2024. Existing customers still have it, but it’s in maintenance mode and AWS explicitly redirects to CloudShell and the VS Code/JetBrains AWS Toolkits. Amazon CodeCatalyst closed to new customers in November 2025, removing its Dev Environments from consideration for new teams. The viable options are:

|  | CloudShell | EC2 dev box | GitHub Codespaces |
|---|---|---|---|
| Cost | Free | EC2 hourly + storage | $0.18+/core-hour |
| Compute | 1 vCPU / 2 GiB (fixed) | Any instance type | 2–16 cores |
| Persistence | 1 GB $HOME, ephemeral compute | Full VM | Full container |
| Session limit | 20–30 min idle, 12 h max | None | 30 min default |
| Docker | Pre-installed, cache not persistent | Full control | Full devcontainer |
| IDE | Terminal + basic web editor | BYO | VS Code Web |
| IAM integration | Automatic (inherited) | Instance profile | Manual OIDC or keys |
| VPC access | SSM tunnel or VPC-mode | Native | Not in AWS network |
| Inbound SSH | Not possible | Yes | Via gh CLI |

The decision is straightforward in practice:

Use CloudShell for interactive ops, Terraform runs, kubectl, SSM tunnelling, secret retrieval, pipeline debugging, and any workflow where “I need a terminal with production credentials right now” is the requirement.

Use an EC2 dev box for anything requiring more than 12 hours uninterrupted, sustained GPU compute, always-on daemons, or serious IDE-based development on large codebases.

Use Codespaces when the primary work is editing code in a full IDE and your corporate network permits it. Use CloudShell alongside it for anything requiring AWS IAM authentication.


The Hard Limits: What Will Bite You

I want to be direct about the constraints, because I’ve seen engineers commit to CloudShell as a daily driver and then get caught by these.

The idle timeout is merciless. Twenty to thirty minutes of no keyboard or pointer activity and your session ends. Running processes don’t count as activity. If you paste a terraform apply into the terminal, walk away, grab a coffee, and spend five minutes in Slack, the apply might be killed before you get back. The mitigation is tmux: run long operations inside a tmux session, and your browser activity keeps the session alive. But tmux does not override the timeout if you’re genuinely idle; it only protects against accidental tab closes.

The 12-hour hard cap is real. Even with constant activity. Plan accordingly: don’t start a major infrastructure migration at 9 AM and expect to finish at 10 PM in the same session.

Docker’s image cache is not in $HOME. It lives on the ephemeral rootfs. Build a large image, let the session expire, and the next session starts with an empty Docker cache. For serious image work, use CodeBuild.

The 1 GB home directory fills up faster than you’d expect. Go binaries carry debug symbols that inflate their size substantially: Terraform is ~110 MB unstripped, ~65 MB stripped. With a full toolchain installed (Terraform, Terragrunt, helm, kubectl, sops, tflint, eksctl, k9s, infracost, yq, plus pip packages and Terraform provider caches), you can realistically consume 700–800 MB. Track your usage with df -h /home/cloudshell-user and strip your binaries.
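When the quota gets tight, a quick du over the home directory (hidden entries included) sorts the offenders so you know what to prune first:

```bash
# Largest items in $HOME first: stripped-vs-unstripped binaries, pip
# packages, npm prefix, and Terraform provider caches all show up here
du -sh "$HOME"/.[!.]* "$HOME"/* 2>/dev/null | sort -rh | head -n 15
```

Terraform's .terraform provider cache directories are the usual surprise entry; they're safe to delete and terraform init recreates them.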

No persistent background processes. CloudShell cannot run cron jobs, systemd services, webhook listeners, or anything that needs to persist between sessions. These belong in Lambda, CodeBuild, ECS Tasks, or Step Functions.

No way to provision a CloudShell environment programmatically. There is no Terraform resource, no CDK construct, no AWS CLI command to create a CloudShell environment. It’s console-launched only. This matters if you’re building onboarding automation for a team.


The Security Picture (That Your SOC Team Needs to Know)

CloudShell’s security posture has some characteristics that deserve explicit attention, particularly if you’re implementing it in a regulated environment.

It inherits full IAM permissions. There is no permission boundary applied to CloudShell by default. Whatever your IAM principal can do in the AWS console, it can do in CloudShell. The AWS-managed policies AWSManagementConsoleBasicUserAccess and AWSManagementConsoleAdministratorAccess both include full cloudshell:* permissions as part of general console access, so engineers may have CloudShell access without it being an explicit decision.

Shell commands are not CloudTrail events. This is the most important security gap to understand. Every AWS API call made from within CloudShell (aws s3 ls, terraform apply, aws eks update-kubeconfig) appears in CloudTrail as a normal API event with the CloudShell user agent. But the shell commands themselves (cat, curl, grep, python3 my-exfil-script.py) are invisible to CloudTrail entirely. An engineer (or an attacker who has compromised a console session) can execute arbitrary commands in CloudShell and the only forensic trail is the AWS API calls those commands happen to trigger.

For detection engineering, the actionable pattern is to alert on sensitive AWS API calls where the userAgent field in CloudTrail contains "CloudShell". Calls to secretsmanager:GetSecretValue, ssm:GetParameter, sts:AssumeRole, s3:GetObject on sensitive buckets: all of these, when originating from a CloudShell session, are worth reviewing. LUCR-3 (the threat actor also known as Scattered Spider) has used CloudShell specifically for hands-on-keyboard API activity in compromised AWS environments. This is not a theoretical risk.
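As a sketch of that detection pattern, a CloudWatch Logs Insights query over a CloudTrail log group can surface exactly those calls. The log group name and the event list here are illustrative, not prescriptive; tune them to your environment:

```bash
# CloudWatch Logs Insights query: sensitive API calls whose user agent
# marks a CloudShell session (event list is illustrative)
QUERY='fields @timestamp, userIdentity.arn, eventName, eventSource
| filter userAgent like /CloudShell/
| filter eventName in ["GetSecretValue", "GetParameter", "AssumeRole"]
| sort @timestamp desc
| limit 50'

# Run it against your CloudTrail log group (name is hypothetical), e.g.:
#   aws logs start-query --log-group-name /aws/cloudtrail/org-trail \
#     --start-time "$(date -d '24 hours ago' +%s)" \
#     --end-time "$(date +%s)" --query-string "$QUERY"
```

This assumes your trail delivers to CloudWatch Logs; if it only delivers to S3, the equivalent query belongs in Athena instead.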

File upload and download are data movement controls. The Actions menu’s upload and download features use presigned S3 URLs under the hood. From a DLP perspective, these represent paths for data exfiltration that bypass traditional endpoint controls. Organisations that need to restrict data movement should deny cloudshell:GetFileUploadUrls and cloudshell:GetFileDownloadUrls at the SCP level for engineers who don’t require them.

To lock down CloudShell access at the SCP level without removing it entirely:

json

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RestrictCloudShellDataMovement",
      "Effect": "Deny",
      "Action": [
        "cloudshell:GetFileUploadUrls",
        "cloudshell:GetFileDownloadUrls"
      ],
      "Resource": "*"
    }
  ]
}

The home directory is encrypted at rest with KMS and stays in-region. For GDPR and data residency purposes, this is straightforward: data doesn’t leave the region. The 120-day retention policy means sensitive material left in $HOME will eventually be deleted if you stop using that region’s CloudShell, but don’t rely on that as a data management strategy.


Power Patterns for Daily Use

Six patterns separate engineers who use CloudShell occasionally from those who use it as a primary workstation.

1. A ~/bootstrap.sh that is idempotent. Every tool install, every alias, every shell configuration: all of it in a single script that skips already-installed components and fails cleanly with a clear error if space is tight. Run it at the start of every session (silently, via ~/.bashrc) and you have a consistent environment regardless of which tab or region you open. I’ve published the full script (link at the end).
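A minimal sketch of the idempotent step, assuming each tool reports a pinned version via --version; the helper name and the 50 MB headroom threshold are illustrative, not the published script:

```bash
# Hypothetical helper: install one release binary into ~/bin, skipping when
# the pinned version is already present and refusing when space is tight.
install_release() {   # usage: install_release <name> <version> <url>
  local name=$1 version=$2 url=$3
  mkdir -p "$HOME/bin"
  if [ -x "$HOME/bin/$name" ] && \
     "$HOME/bin/$name" --version 2>/dev/null | grep -q "$version"; then
    echo "skip: $name $version already installed"
    return 0
  fi
  # keep ~50 MB headroom within the 1 GB quota before downloading
  local free_kb
  free_kb=$(df -k "$HOME" | awk 'NR==2 {print $4}')
  if [ "$free_kb" -lt 51200 ]; then
    echo "error: low space in \$HOME, refusing to install $name" >&2
    return 1
  fi
  curl -fsSL "$url" -o "$HOME/bin/$name"
  chmod +x "$HOME/bin/$name"
  strip "$HOME/bin/$name" 2>/dev/null || true
  echo "installed: $name $version"
}
```

Because every branch is cheap when nothing needs doing, the whole script can run unconditionally from ~/.bashrc without slowing session start.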

2. Dotfiles in S3. Your .bashrc, .gitconfig, script library, and working notes all backed up to a private S3 bucket at the end of each session, restored at the start of the next:

bash

# Back up at session end
aws s3 sync ~ s3://my-tooling/cloudshell-dotfiles \
  --exclude "bin/*" --exclude ".local/lib/*" --quiet

# Restore at session start (integrated into bootstrap.sh)
aws s3 sync s3://my-tooling/cloudshell-dotfiles ~ --quiet

3. tmux always. Start every working session inside tmux. It protects against tab closes and browser refreshes, lets you split panes (one for the terraform apply, one for watching logs), and gives you named sessions you can reattach to:

bash

tmux new -s infra     # named session
# Ctrl-b %            # split into side-by-side panes (default binding)
# Ctrl-b d            # detach (session keeps running)
tmux attach -t infra  # reattach from any tab

4. AWS_PAGER="" in your environment. By default, AWS CLI output that exceeds the terminal height opens in less and waits for you to press q before returning to the prompt. In scripts and pipelines, this is catastrophic; the script hangs waiting for input that never comes. Add export AWS_PAGER="" to your .bashrc and this goes away permanently.

5. aws configure export-credentials for credential chaining. Added in AWS CLI v2 in May 2023, this command exports your current CloudShell credentials as environment variable export statements, useful when you need to pass credentials into a tool that reads them from the environment rather than the credential chain:

bash

# Export as environment variables for the current shell
eval "$(aws configure export-credentials --format env)"

# Export as PowerShell variables
aws configure export-credentials --format powershell

6. CodeBuild as a remote build farm. Anything that doesn’t fit in CloudShell’s constraints (applies over 12 hours, Docker builds over 1 GB, anything requiring GPU or more than 2 GiB RAM) belongs in CodeBuild, triggered from CloudShell and monitored from CloudShell:

bash

# Kick off a build
BUILD_ID=$(aws codebuild start-build \
  --project-name my-heavy-build \
  --query 'build.id' --output text)

# Stream the logs
aws logs tail /aws/codebuild/my-heavy-build \
  --follow --format short --since 1m

CloudShell as the launchpad, CodeBuild as the engine: this is the mature answer to CloudShell’s time and resource limits.


My Verdict

AWS CloudShell in 2025 is not a toy and it’s not a workaround. For DevSecOps engineers doing platform work (infrastructure management, cluster operations, secrets workflows, pipeline debugging), it is a genuinely capable working environment that happens to run in a browser.

The constraints are real: 1 GB home directory, 20–30 minute idle timeout, 12-hour hard cap, no persistent background processes. None of them are disqualifying for the workflows where CloudShell excels. For the workflows where it doesn’t (sustained development on large codebases, long-running unattended processes, GPU work), reach for an EC2 dev box or CodeBuild.

For the engineer on the locked-down corporate laptop who needs to run a terraform plan right now and can’t install a binary to save their life: CloudShell is the answer. Open the AWS console. Click the shell icon. Everything you need is already there, authenticated with your IAM identity, and running inside AWS where your IT department can’t reach it.

The only install required is in your muscle memory.


Resources

  • bootstrap.sh: the idempotent toolchain installer referenced throughout this post, with README, available on my GitHub [here]
  • AWS CloudShell documentation: docs.aws.amazon.com/cloudshell
  • CloudShell VPC environments: docs.aws.amazon.com/cloudshell/latest/userguide/using-cshell-in-vpc.html
  • SSM Session Manager port forwarding: docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html

Got questions or a pattern I haven’t covered? Drop them in the comments below, or reach me directly at consulting.ogunlana.net.


Bola Ogunlana is a Senior DevSecOps Engineer and AWS-certified cloud architect with 25+ years of enterprise delivery experience across a myriad of sectors. He runs a cloud consulting practice at consulting.ogunlana.net specialising in AWS/Azure cost optimisation and platform engineering.
