<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Security on Mark Wolfe&#39;s Blog</title>
    <link>https://www.wolfe.id.au/tags/security/</link>
    <description>Recent content in Security on Mark Wolfe&#39;s Blog</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Tue, 02 Dec 2025 08:55:22 +1000</lastBuildDate><atom:link href="https://www.wolfe.id.au/tags/security/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Why Connect RPC is a great choice for building APIs</title>
      <link>https://www.wolfe.id.au/2025/12/02/why-connect-rpc-is-a-great-choice-for-building-apis/</link>
      <pubDate>Tue, 02 Dec 2025 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2025/12/02/why-connect-rpc-is-a-great-choice-for-building-apis/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://connectrpc.com/&#34;&gt;Connect RPC&lt;/a&gt; is a suite of libraries which enable you to build HTTP based APIs which are gRPC compatible. It provides a bridge between &lt;a href=&#34;https://grpc.io/&#34;&gt;gRPC&lt;/a&gt; and HTTP/1.1, letting you leverage HTTP/2&amp;rsquo;s multiplexing and performance benefits while still supporting HTTP/1.1 clients. This makes it a great solution for teams looking to get the performance benefits of gRPC, while maintaining broad client compatibility.&lt;/p&gt;
&lt;p&gt;HTTP/2&amp;rsquo;s multiplexing and binary framing make it significantly more efficient than HTTP/1.1, reducing latency and improving throughput. Connect RPC lets you harness these benefits while maintaining broad client compatibility for services that can&amp;rsquo;t yet support HTTP/2.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://connectrpc.com/">Connect RPC</a> is a suite of libraries which enable you to build HTTP based APIs which are gRPC compatible. It provides a bridge between <a href="https://grpc.io/">gRPC</a> and HTTP/1.1, letting you leverage HTTP/2&rsquo;s multiplexing and performance benefits while still supporting HTTP/1.1 clients. This makes it a great solution for teams looking to get the performance benefits of gRPC, while maintaining broad client compatibility.</p>
<p>HTTP/2&rsquo;s multiplexing and binary framing make it significantly more efficient than HTTP/1.1, reducing latency and improving throughput. Connect RPC lets you harness these benefits while maintaining broad client compatibility for services that can&rsquo;t yet support HTTP/2.</p>
<p>Connect RPC can be used to build both internal and external APIs, powering frontends, mobile apps, CLIs, agents and more. See the list of <a href="https://github.com/connectrpc">supported languages</a>.</p>
<h2 id="core-features">Core Features</h2>
<p>Connect RPC provides a number of features out of the box, such as:</p>
<ul>
<li><a href="https://connectrpc.com/docs/go/interceptors">Interceptors</a> which make it easy to extend Connect RPC and are used to add authentication, logging, metrics, tracing and retries.</li>
<li><a href="https://connectrpc.com/docs/go/serialization-and-compression">Serialization &amp; compression</a>, with pluggable serializers and support for asymmetric compression, reducing the amount of data that needs to be transmitted or received.</li>
<li><a href="https://connectrpc.com/docs/go/errors">Error handling</a>, with a standard error format and support for custom error codes, allowing for more granular error handling.</li>
<li><a href="https://connectrpc.com/docs/go/observability">Observability</a>, with built-in support for OpenTelemetry, enabling you to easily add tracing or metrics to your APIs.</li>
<li><a href="https://connectrpc.com/docs/go/streaming">Streaming</a>, which provides a very efficient way to push or pull data without polling.</li>
<li><a href="https://connectrpc.com/docs/protocol/#summary">Schemas</a>, which enable you to define and validate your API schemas, and generate code from them.</li>
<li><a href="https://connectrpc.com/docs/web/generating-code/#local-generation">Code generation</a> for <a href="https://go.dev">Go</a>, <a href="https://www.typescriptlang.org/">TypeScript</a>, <a href="https://kotlinlang.org/">Kotlin</a>, <a href="https://developer.apple.com/swift/">Swift</a> and <a href="https://www.java.com/en/">Java</a>.</li>
</ul>
<h2 id="ecosystem">Ecosystem</h2>
<p>In addition to these features, Connect RPC is built on top of the Buf ecosystem, which offers notable benefits:</p>
<ul>
<li><a href="https://buf.build/blog/connect-rpc-joins-cncf">Connect RPC joins CNCF</a>, entering the cloud-native ecosystem, which is great for the long term sustainability of the project.</li>
<li><a href="https://buf.build/product/bsr">Buf Schema Registry</a>, which is a great tool for managing, sharing and versioning your API schemas.</li>
<li><a href="https://buf.build/product/cli">Buf CLI</a>, a handy all in one tool for managing your APIs, generating code and linting.</li>
</ul>
<h2 id="recommended-interceptor-packages">Recommended Interceptor Packages</h2>
<p>Some handy Go packages that provide pre-built Connect RPC interceptors worth exploring or using as a starting point:</p>
<ul>
<li><a href="https://github.com/connectrpc/authn-go">authn-go</a> provides a prebuilt authentication middleware library for Go. It works with any authentication scheme (including HTTP basic authentication, cookies, bearer tokens, and mutual TLS).</li>
<li><a href="https://github.com/connectrpc/validate-go">validate-go</a> provides a Connect RPC interceptor that takes the tedium out of data validation. This package is powered by <a href="https://github.com/bufbuild/protovalidate-go">protovalidate</a>
and the <a href="https://github.com/google/cel-spec">Common Expression Language</a>.</li>
<li><a href="https://github.com/mdigger/rpclog">rpclog</a> provides a structured logging interceptor for Connect RPC with support for both unary and streaming RPCs.</li>
</ul>
<h2 id="summary">Summary</h2>
<ol>
<li>
<p>Connect RPC provides a paved and well maintained path to building gRPC compatible APIs, while maintaining compatibility for HTTP/1.1 clients. This is invaluable for product teams that need to support multiple client types without building custom compatibility layers.</p>
</li>
<li>
<p>Using a mature library like Connect RPC, you get to benefit from all the prebuilt integrations, and the added capabilities of the Buf ecosystem. This makes publishing and consuming APIs a breeze.</p>
</li>
<li>
<p>Protobuf schemas, high performance serialisation and compression ensure you get robust and efficient APIs.</p>
</li>
</ol>
<h2 id="conclusion">Conclusion</h2>
<p>Connect RPC makes it easy to build high-performance, robust APIs with gRPC compatibility, while avoiding the complexity of building and maintaining custom compatibility layers.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Why OIDC?</title>
      <link>https://www.wolfe.id.au/2025/11/16/why-oidc/</link>
      <pubDate>Sun, 16 Nov 2025 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2025/11/16/why-oidc/</guid>
      <description>&lt;p&gt;Over the last few years there has been a push away from using machine identity for continuous integration (CI) agents, or runners, and instead use a more targeted, least privileged approach to authentication and authorization. This is where &lt;a href=&#34;https://openid.net/developers/how-connect-works/&#34;&gt;OIDC (OpenID Connect)&lt;/a&gt; comes in, which is a method of authentication used to bridge between the CI provider and cloud services such as AWS, Azure, and Google Cloud.&lt;/p&gt;
&lt;p&gt;In this model the CI provider acts as an identity provider, issuing tokens to the CI runner/agent which include a set of claims identifying the owner, pipeline, workflow and job that is being executed. This is then used to authenticate with the cloud service, and access the resources that the pipeline, workflow and job require.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the last few years there has been a push away from using machine identity for continuous integration (CI) agents, or runners, and instead use a more targeted, least privileged approach to authentication and authorization. This is where <a href="https://openid.net/developers/how-connect-works/">OIDC (OpenID Connect)</a> comes in, which is a method of authentication used to bridge between the CI provider and cloud services such as AWS, Azure, and Google Cloud.</p>
<p>In this model the CI provider acts as an identity provider, issuing tokens to the CI runner/agent which include a set of claims identifying the owner, pipeline, workflow and job that is being executed. This is then used to authenticate with the cloud service, and access the resources that the pipeline, workflow and job require.</p>
<p>In simple terms, this is a form of trust delegation, where the CI provider is trusted by the cloud service to issue tokens on behalf of the owner, pipeline, workflow and job.</p>
<h2 id="how-oidc-works">How OIDC Works</h2>
<p>The OIDC trust delegation flow is as follows:</p>
<pre class="mermaid">sequenceDiagram
    participant CI as CI Provider&lt;br/&gt;(Identity Provider)
    participant Runner as CI Runner/Agent
    participant Cloud as Cloud Service&lt;br/&gt;(AWS/Azure/GCP)

    Note over CI,Cloud: OIDC Trust Delegation Flow

    CI-&gt;&gt;Runner: Issue OIDC token with claims&lt;br/&gt;(pipeline, workflow, job)
    Runner-&gt;&gt;Cloud: Request access with OIDC token
    Cloud-&gt;&gt;Cloud: Verify token signature&lt;br/&gt;and validate claims
    Cloud-&gt;&gt;Runner: Grant temporary credentials
    Runner-&gt;&gt;Cloud: Access resources with credentials

    Note over CI,Cloud: Trust established via OIDC configuration
</pre>
<p>There are a few things to note:</p>
<ul>
<li>When using OIDC, the runner doesn&rsquo;t need to be registered with the cloud service; it is granted access via the OIDC token.</li>
<li>The OIDC token is cryptographically signed by the CI provider, and the cloud service verifies the signature to ensure the token is valid.</li>
<li>In this model the CI provider is trusted to issue and sign tokens, the cloud service verifies them, and the runner simply presents the token it was given.</li>
</ul>
<h2 id="limiting-cloud-access-to-the-agentrunner">Limiting Cloud Access to the Agent/Runner</h2>
<p>To ensure the CI provider can&rsquo;t access the cloud service directly, you can add conditions which ensure only the runner/agent is allowed to access the cloud resources.</p>
<p>On top of this, cloud providers such as AWS have conditions which can <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-network-properties">restrict access to a specific AWS network resource, such as a VPC</a>. I recommend familiarizing yourself with the documentation for your cloud provider to understand how to lock down access to the runner/agent.</p>
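<p>As an illustration, here is what an IAM role trust policy for GitHub Actions OIDC typically looks like on AWS; the account ID and repository below are hypothetical. The <code>sub</code> condition limits the role to tokens issued for a specific repository and branch, which is how you scope access down to a particular pipeline:</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:example-org/example-repo:ref:refs/heads/main"
        }
      }
    }
  ]
}
```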
<h2 id="benefits-of-oidc">Benefits of OIDC</h2>
<p>The reasons this is useful are:</p>
<ul>
<li>It provides a more secure and flexible approach to authentication and authorization</li>
<li>It limits the scope of the token to the specific pipeline, workflow, and job</li>
<li>It is tied to the lifecycle of the pipeline, workflow, and job, which means the token is limited to the duration of that execution</li>
<li>It is more flexible than using machine identity for CI runners/agents as it allows for more granular control over the permissions granted to the runner/agent</li>
</ul>
<h2 id="ephemeral-runnersagents">Ephemeral Runners/Agents</h2>
<p>Ephemeral runners/agents are short-lived, single-job or single-workflow runners/agents created before the workflow or job is started. They provide a more secure and flexible approach to job execution, as there is no need to worry about these environments being tainted by previous jobs or workflows.</p>
<p>When paired with OIDC these environments provide an extra layer of security as they are destroyed after the job or workflow is complete, further reducing the risk of cross job or workflow access.</p>
<h2 id="summary">Summary</h2>
<p>So in summary, OIDC provides a more secure and flexible approach to access management for CI projects, and it is particularly useful when paired with ephemeral runners/agents.</p>
<p>The biggest advantage of this approach is that it allows engineers to focus on the access required by the pipeline, workflow, and job, rather than having to manage machine identities and permissions for each runner/agent.</p>
<p>One of the interesting things about this approach is that you&rsquo;re not limited to using OIDC just with cloud providers; you can use it with your own services as well. By using OIDC libraries such as <a href="https://github.com/coreos/go-oidc">github.com/coreos/go-oidc</a>, you can implement APIs which can use the identity of CI pipelines, workflows, and jobs. An example of this is <a href="https://www.hashicorp.com/en/resources/using-oidc-with-hashicorp-vault-and-github-actions">Using OIDC With HashiCorp Vault and GitHub Actions</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://buildkite.com/docs/pipelines/security/oidc">OIDC for Buildkite</a></li>
<li><a href="https://docs.github.com/en/enterprise-cloud@latest/actions/concepts/security/openid-connect">OIDC for GitHub Actions</a></li>
<li><a href="https://docs.gitlab.com/integration/openid_connect_provider/">OIDC for GitLab</a></li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>Getting started with AI for developers</title>
      <link>https://www.wolfe.id.au/2023/12/16/getting-started-with-ai-for-developers/</link>
      <pubDate>Sat, 16 Dec 2023 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2023/12/16/getting-started-with-ai-for-developers/</guid>
      <description>&lt;p&gt;As a software developer, I have seen a lot of changes over the years; however, few have been as drastic as the rise of artificial intelligence. There is a growing list of tools and services using this technology to help developers with day-to-day tasks and speed up their work; however, few of these tools help them understand how this technology works and what it can do. So I wanted to share some of my own tips on how to get started with AI.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As a software developer, I have seen a lot of changes over the years; however, few have been as drastic as the rise of artificial intelligence. There is a growing list of tools and services using this technology to help developers with day-to-day tasks and speed up their work; however, few of these tools help them understand how this technology works and what it can do. So I wanted to share some of my own tips on how to get started with AI.</p>
<p>The aim of this exercise is to help develop some intuition of how AI works, and how it can be used to help in your day-to-day tasks, while hopefully discovering ways to use it in future applications you build.</p>
<h2 id="getting-started">Getting Started</h2>
<p>As the common saying, which originated from a Chinese proverb, goes:</p>
<blockquote>
<p>A journey of a thousand miles begins with a single step.</p>
</blockquote>
<p>To kick off your understanding of AI, I recommend you select a coding assistant and start using it on your personal or side projects. This will provide you with a better understanding of how it succeeds, and sometimes fails. Building this knowledge up will help you develop an understanding of its strengths and weaknesses as a user.</p>
<p>I personally recommend getting started with <a href="https://about.sourcegraph.com/cody">Cody</a> as it is a great tool, and is free for personal use, while also being open source itself. The developers of Cody are very open and helpful, and have a great community of users, while also sharing their own experiences while building the tool.</p>
<p>Cody is more than just a code completion tool: you can ask it questions, get it to summarise and document your code, and even generate test cases. Make sure you explore all the options, again to build up more knowledge of how these AI tools work.</p>
<p>And most importantly, be curious, and explore every corner of the tool.</p>
<h2 id="diving-into-llms">Diving Into LLMs</h2>
<p>Next, I recommend you start experimenting with some of the open source large language models (LLMs) using tools such as <a href="https://ollama.ai/">ollama</a> to allow you to download, run and experiment with the software. To get started with this tool, you can follow the quick start in the <code>README.md</code> hosted at <a href="https://github.com/jmorganca/ollama">https://github.com/jmorganca/ollama</a>. Also there is a great intro by Sam Witteveen <a href="https://www.youtube.com/watch?v=Ox8hhpgrUi0&amp;t=2s">Ollama - Local Models on your machine</a> which I highly recommend.</p>
<h2 id="what-is-a-large-language-model">What is a large language model?</h2>
<p>Here is a quote from <a href="https://en.wikipedia.org/wiki/Large_language_model">wikipedia on what a large language model</a> is:</p>
<blockquote>
<p>A large language model (LLM) is a large scale language model notable for its ability to achieve general-purpose language understanding and generation. LLMs acquire these abilities by using massive amounts of data to learn billions of parameters during training and consuming large computational resources during their training and operation. LLMs are artificial neural networks (mainly <a href="https://en.wikipedia.org/wiki/Transformer_%28machine_learning_model%29">transformers</a>) and are (pre)trained using self-supervised learning and semi-supervised learning.</p>
</blockquote>
<h2 id="why-open-llms">Why Open LLMs?</h2>
<p>I prefer to learn from the open LLMs for the following reasons:</p>
<ol>
<li>They have a great community of developers and users, who share information about the latest developments.</li>
<li>You get a broader range of models, and can try them out and see what they do.</li>
<li>You can run them locally with your data, and see what they do without some of the privacy concerns of cloud based services.</li>
<li>You have the potential to fine tune them to your data, and improve the performance.</li>
</ol>
<p>To keep up with the latest developments I use the <a href="https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard">Hugging Face Open LLM Leaderboard</a>, as they have been doing a lot of work on large language models, and have a great community of users. When the latest models are posted, experiences and fine-tuned versions are also shared via their <a href="https://huggingface.co/blog">blog</a>, which is a great resource. Notable models are normally added to ollama after a day or so, so you can try them out and see what they do.</p>
<p>There are a number of different types of LLMs, each with their own strengths and weaknesses. I personally like to experiment with the chat bot models, as they are very simple to use, and are easy to interface with via ollama. An example of one of these from the Hugging Face site is <a href="https://huggingface.co/mistralai/Mistral-7B-v0.1">https://huggingface.co/mistralai/Mistral-7B-v0.1</a>, a model trained by the <a href="https://mistral.ai/">Mistral AI</a> team.</p>
<p>To get started with this model you can follow the instructions at <a href="https://ollama.ai/library/mistral">https://ollama.ai/library/mistral</a> to download and run the model locally.</p>
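<p>Assuming you have ollama installed, pulling the model and asking it a one-off question looks like this:</p>

```shell
# Download the model weights (several GB on first run)
ollama pull mistral

# Ask a one-off question from the command line
ollama run mistral "How would I create an encrypted S3 bucket using terraform?"
```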
<h2 id="pick-a-scenario-to-test">Pick a Scenario To Test</h2>
<p>My scenario relates to my current role, and covers questions which my team encounters on a day-to-day basis. As a team we are providing advice to customers about how to improve the operational readiness and security posture of internally developed applications. This is a common scenario for many companies, where applications are developed to provide a proof of concept, and are then deployed to a production environment without the supporting processes in place.</p>
<p>This approach is helpful because:</p>
<ol>
<li>This is a scenario I can relate to, and can use my existing knowledge to review the results.</li>
<li>This is a scenario which is not too complex, and can be used to demonstrate the concepts.</li>
<li>This is a scenario which will provide me value while I am learning how to use the tools.</li>
</ol>
<h2 id="building-a-list-of-questions">Building a list of questions</h2>
<p>Once you have a scenario, you can draft a list of questions to start testing models with. This will help you understand how the models work, and how they can be used to support a team or business unit, while you also learn how to use them.</p>
<p>The questions I am currently using mainly focus on DevOps, and SRE processes, paired with a dash of <a href="https://aws.amazon.com/">AWS</a> security and terraform questions.</p>
<h3 id="i-need-to-create-a-secure-environment-in-and-aws-account-where-should-i-start">I need to create a secure environment in an AWS Account, where should I start?</h3>
<p>This question is really common for developers starting out in AWS. It is quite broad, and I am mostly expecting a high level overview of how to create a secure environment and how to get started.</p>
<h3 id="how-would-i-create-an-encrypted-secure-s3-bucket-using-terraform">How would I create an encrypted, secure S3 bucket using terraform?</h3>
<p>This question is a bit more specific, focusing on a single AWS service while also adding a few specific requirements. Models like Mistral will provide a step-by-step guide on how to achieve this, while others will provide the terraform code itself.</p>
<h3 id="i-need-to-create-an-application-risk-management-program-where-should-i-start">I need to create an Application Risk Management Program, where should I start?</h3>
<p>This question is quite common if you&rsquo;re working in a company which doesn&rsquo;t have a long history with internal software development, or a team that is trying to ensure they cover the risks of their applications.</p>
<h3 id="what-is-a-good-sre-incident-process-for-a-business-application">What is a good SRE incident process for a business application?</h3>
<p>This question is also quite broad, but includes Site Reliability Engineering (SRE) as a keyword, so I am expecting an answer which aligns with the principles of this movement.</p>
<h3 id="what-is-a-good-checklist-for-a-serverless-developer-who-wants-to-improve-the-monitoring-of-their-applications">What is a good checklist for a serverless developer who wants to improve the monitoring of their applications?</h3>
<p>This is a common question asked by people who are just getting started with serverless and are interested in, or have been asked to improve the monitoring of their applications.</p>
<h2 id="whats-next">What&rsquo;s Next?</h2>
<p>So now that you have a scenario and a few questions I recommend you do the following:</p>
<ol>
<li>Try a couple of other models; <a href="https://ollama.ai/library/llama2">llama2</a> and <a href="https://ollama.ai/library/orca2">orca2</a> are a good starting point.</li>
<li>Learn a bit about prompting by following <a href="https://replicate.com/blog/how-to-prompt-llama">A guide to prompting Llama 2</a> from the replicate blog.</li>
<li>Apply the prompts to your ollama model using a <a href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md">modelfile</a>, which is similar to a <a href="https://docs.docker.com/engine/reference/builder/">Dockerfile</a>.</li>
<li>Try out an uncensored model, something like <a href="https://ollama.ai/library/llama2-uncensored">llama2-uncensored</a> and run through your questions, then ask about breaking into cars or killing processes, which can be a problematic question in some censored models. It is good to understand what censoring a model does, as it can be a useful tool for understanding the risks of using a model.</li>
<li>Start reading more about <a href="https://github.com/premAI-io/state-of-open-source-ai">The State of Open Source AI (2023 Edition)</a>.</li>
</ol>
<h2 id="further-research">Further Research</h2>
<p>Now that you are dabbling with LLMs and AI, I recommend you try these models for the odd question in your day-to-day work; the local ones running in ollama are relatively safe, and they can save you a lot of work.</p>
<p>Also try similar questions with services such as <a href="https://chat.openai.com/">https://chat.openai.com/</a>; hosted services are a powerful tool for ad hoc testing and learning. Just be aware of data privacy and security when using these services.</p>
<p>Once you have some experience you will hopefully even incorporate a model into work projects such as data cleansing, summarization, or processing of user feedback to help you improve your applications. For this you can use services such as <a href="https://aws.amazon.com/bedrock/">AWS Bedrock</a> on AWS, or <a href="https://cloud.google.com/generative-ai-studio">Generative AI Studio</a> on Google Cloud, while following the same methodology to evaluate and select a model for your use case.</p>
<p>If you&rsquo;re intrigued and want to go even deeper than these APIs, I recommend you dive into some of the amazing resources on the web for learning how AI and LLMs work, and possibly even develop, or fine tune, your own models.</p>
<ul>
<li><a href="https://www.fast.ai/">fast.ai</a>, which provides some great online self-paced learning on AI.</li>
<li><a href="https://www.youtube.com/watch?v=zjkBMFhNj_g">A busy person&rsquo;s intro to LLMs</a>, a great lecture on LLMs.</li>
<li><a href="http://introtodeeplearning.com/">MIT Introduction to Deep Learning</a> for those who want to dive deeper and prefer more of a structured course.</li>
<li><a href="https://www.youtube.com/watch?v=jkrNMKz9pWU">A Hackers&rsquo; Guide to Language Models</a>, another great talk by Jeremy Howard of fast.ai.</li>
</ul>
]]></content:encoded>
    </item>
    
    <item>
      <title>Avoid accidental exposure of authenticated Amazon API Gateway resources</title>
      <link>https://www.wolfe.id.au/2023/11/12/avoid-accidental-exposure-of-authenticated-amazon-api-gateway-resources/</link>
      <pubDate>Sun, 12 Nov 2023 08:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2023/11/12/avoid-accidental-exposure-of-authenticated-amazon-api-gateway-resources/</guid>
      <description>&lt;p&gt;I have been working with &lt;a href=&#34;https://aws.amazon.com/api-gateway/&#34;&gt;Amazon API Gateway&lt;/a&gt; for a while and one thing I noticed is there are a few options for authentication, which can be confusing to developers, and lead to security issues. This post will cover one of the common security pitfalls with API Gateway and how to mitigate it.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re using &lt;code&gt;AWS_IAM&lt;/code&gt; authentication on an API Gateway, then make sure you set the default authorizer for all API resources. This will avoid accidentally exposing an API if you misconfigure, or omit, an authentication method for an API resource, as the default is &lt;code&gt;None&lt;/code&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been working with <a href="https://aws.amazon.com/api-gateway/">Amazon API Gateway</a> for a while and one thing I noticed is there are a few options for authentication, which can be confusing to developers, and lead to security issues. This post will cover one of the common security pitfalls with API Gateway and how to mitigate it.</p>
<p>If you&rsquo;re using <code>AWS_IAM</code> authentication on an API Gateway, then make sure you set the default authorizer for all API resources. This will avoid accidentally exposing an API if you misconfigure, or omit, an authentication method for an API resource, as the default is <code>None</code>.</p>
<p>In addition to this, you can apply a resource policy to an API Gateway, which will enforce a specific IAM access check on all API requests. Combining the default authorizer override with the resource policy applies multiple layers of protection to our API, following the principle of defense in depth.</p>
<p>To summarise, protecting your API with IAM authentication involves the following:</p>
<ol>
<li>Enable a default authorizer method on the API Gateway resource.</li>
<li>Enable an authentication method on the API.</li>
<li>Assign an API resource policy which requires IAM authentication to access the API.</li>
</ol>
<p>Doing this with <a href="https://aws.amazon.com/serverless/sam/">AWS SAM</a> is fairly straightforward; to read more about it, see the <a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-api-apiauth.html">SAM ApiAuth documentation</a>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">AthenaWorkflowApi</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::Serverless::Api</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="l">...</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">Auth</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="c"># Specify a default authorizer for the API Gateway API to protect against missing configuration</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">DefaultAuthorizer</span><span class="p">:</span><span class="w"> </span><span class="l">AWS_IAM</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="c"># Configure Resource Policy for all methods and paths on an API as an extra layer of protection</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">ResourcePolicy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="c"># The AWS accounts to allow</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">AwsAccountWhitelist</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">           </span>- !<span class="l">Ref AWS::AccountId</span><span class="w">
</span></span></span></code></pre></div><p>Through the magic of AWS SAM this produces a resource policy which looks like the following, with all the API methods protected and only accessible by users authenticated to this account, and only where they are granted access via an IAM policy.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&#34;Version&#34;</span><span class="p">:</span> <span class="s2">&#34;2012-10-17&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&#34;Statement&#34;</span><span class="p">:</span> <span class="p">[</span>
</span></span><span class="line"><span class="cl">    <span class="p">{</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Effect&#34;</span><span class="p">:</span> <span class="s2">&#34;Allow&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Principal&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;AWS&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:iam::123456789012:root&#34;</span>
</span></span><span class="line"><span class="cl">      <span class="p">},</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Action&#34;</span><span class="p">:</span> <span class="s2">&#34;execute-api:Invoke&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Resource&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:execute-api:us-west-2:123456789012:abc123abc1/Prod/POST/athena/run_s3_query_template&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">},</span>
</span></span><span class="line"><span class="cl">    <span class="p">{</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Effect&#34;</span><span class="p">:</span> <span class="s2">&#34;Allow&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Principal&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;AWS&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:iam::123456789012:root&#34;</span>
</span></span><span class="line"><span class="cl">      <span class="p">},</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Action&#34;</span><span class="p">:</span> <span class="s2">&#34;execute-api:Invoke&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&#34;Resource&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:execute-api:us-west-2:123456789012:abc123abc1/Prod/POST/athena/run_query_template&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span>
</span></span><span class="line"><span class="cl">  <span class="p">]</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><p>I typically use an OpenAPI spec to define the API, using the extensions provided by AWS such as <code>x-amazon-apigateway-auth</code> to define the authorisation.</p>
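<p>As a minimal sketch, assuming the <code>run_query_template</code> path shown in the resource policy above, the relevant fragment of such a spec looks something like this:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml">paths:
  /athena/run_query_template:
    post:
      # enable IAM (SigV4) authorisation for this method
      x-amazon-apigateway-auth:
        type: aws_iam
</code></pre></div>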
<p>With the default authorizer set to <code>AWS_IAM</code>, hitting an API method which is missing <code>x-amazon-apigateway-auth</code> using curl returns the following error.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span><span class="nt">&#34;message&#34;</span><span class="p">:</span><span class="s2">&#34;Missing Authentication Token&#34;</span><span class="p">}</span>
</span></span></code></pre></div><p>With the default authorizer disabled and the resource policy enabled, the API returns the following error, which illustrates the principle of defense in depth.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span><span class="nt">&#34;Message&#34;</span><span class="p">:</span><span class="s2">&#34;User: anonymous is not authorized to perform: execute-api:Invoke on resource: arn:aws:execute-api:us-east-1:********9012:abc123abc1/Prod/POST/athena/run_query_template&#34;</span><span class="p">}</span>
</span></span></code></pre></div>]]></content:encoded>
    </item>
    
    <item>
      <title>Stop using IAM User Credentials with Terraform Cloud</title>
      <link>https://www.wolfe.id.au/2023/07/17/stop-using-iam-user-credentials-with-terraform-cloud/</link>
      <pubDate>Mon, 17 Jul 2023 07:55:22 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2023/07/17/stop-using-iam-user-credentials-with-terraform-cloud/</guid>
      <description>&lt;p&gt;I recently started using &lt;a href=&#34;https://www.terraform.io/&#34;&gt;Terraform Cloud&lt;/a&gt; but discovered that the &lt;a href=&#34;https://developer.hashicorp.com/terraform/tutorials/cloud-get-started/cloud-create-variable-set#create-a-variable-set&#34;&gt;getting started tutorial&lt;/a&gt; which describes how to integrate it with &lt;a href=&#34;https://aws.amazon.com/&#34;&gt;Amazon Web Services (AWS)&lt;/a&gt; suggested using &lt;a href=&#34;https://aws.amazon.com/iam/features/managing-user-credentials/&#34;&gt;IAM user credentials&lt;/a&gt;. This is not ideal as these credentials are long-lived and can lead to security issues.&lt;/p&gt;
&lt;h2 id=&#34;what-is-the-problem-with-iam-user-credentials&#34;&gt;What is the problem with IAM User Credentials?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;IAM User Credentials are long lived, meaning once compromised they allow access for a long time&lt;/li&gt;
&lt;li&gt;They are static, so if leaked it is difficult to revoke access immediately&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But there are better alternatives, the one I recommend is &lt;a href=&#34;https://openid.net/developers/how-connect-works/&#34;&gt;OpenID Connect (OIDC)&lt;/a&gt;, which if you dig deep into the Terraform Cloud docs is a supported approach. This has a few benefits:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently started using <a href="https://www.terraform.io/">Terraform Cloud</a> but discovered that the <a href="https://developer.hashicorp.com/terraform/tutorials/cloud-get-started/cloud-create-variable-set#create-a-variable-set">getting started tutorial</a> which describes how to integrate it with <a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> suggested using <a href="https://aws.amazon.com/iam/features/managing-user-credentials/">IAM user credentials</a>. This is not ideal as these credentials are long-lived and can lead to security issues.</p>
<h2 id="what-is-the-problem-with-iam-user-credentials">What is the problem with IAM User Credentials?</h2>
<ul>
<li>IAM User Credentials are long lived, meaning once compromised they allow access for a long time</li>
<li>They are static, so if leaked it is difficult to revoke access immediately</li>
</ul>
<p>But there are better alternatives, the one I recommend is <a href="https://openid.net/developers/how-connect-works/">OpenID Connect (OIDC)</a>, which if you dig deep into the Terraform Cloud docs is a supported approach. This has a few benefits:</p>
<ol>
<li>Credentials are dynamically created for each run, so if one set is compromised it does not affect other runs.</li>
<li>When Terraform Cloud authenticates with AWS using OIDC it will pass information about the project and run, so you can enforce IAM policies based on this context.</li>
<li>Credentials are short lived, expiring after the Terraform run completes.</li>
<li>You can immediately revoke access by removing the OIDC provider from AWS.</li>
<li>You don’t need to export credentials from AWS and manage their rotation.</li>
</ol>
<p>Overall this allows for a more secure and scalable approach to integrating Terraform Cloud with AWS. If you are just starting out, I would recommend setting up OpenID Connect integration instead of using IAM credentials.</p>
<h2 id="aws-deployment">AWS Deployment</h2>
<p>To link AWS to Terraform Cloud we need to deploy some resources on the AWS side; in my case I am using a CloudFormation template which I deploy manually. You can find the source code for this template in my <a href="https://github.com/wolfeidau/terraform-cloud-aws-blog">GitHub Repo</a>, along with a Terraform example which deploys the same resources.</p>
<p>Using the CloudFormation template as the example for this post, it creates:</p>
<ol>
<li>An IAM Role, which is assumed by Terraform Cloud when deploying</li>
<li>An OpenID Connect Provider, which is used to connect Terraform Cloud to AWS</li>
</ol>
<p>The Terraform Deployment role is as follows:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">TerraformDeploymentRole</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::IAM::Role</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">AssumeRolePolicyDocument</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Statement</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="nt">Effect</span><span class="p">:</span><span class="w"> </span><span class="l">Allow</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Action</span><span class="p">:</span><span class="w"> </span><span class="l">sts:AssumeRoleWithWebIdentity</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Principal</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">Federated</span><span class="p">:</span><span class="w"> </span>!<span class="l">Ref TerraformOIDCProvider</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Condition</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">StringEquals</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">app.terraform.io:aud</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;aws.workload.identity&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">StringLike</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">app.terraform.io:sub</span><span class="p">:</span><span class="w"> </span>!<span class="l">Sub organization:${OrganizationName}:project:${ProjectName}:workspace:${WorkspaceName}:run_phase:*</span><span class="w">
</span></span></span></code></pre></div><p><strong>Note:</strong></p>
<ul>
<li>The IAM role allows Terraform Cloud to assume the role using the OIDC provider, and limits it to the given organization, project and workspace names.</li>
<li>The policy attached to this role, in my example, only allows Terraform to list S3 buckets; you should customise this based on your needs.</li>
</ul>
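<p>For illustration, a minimal policy of that kind attached to the role could look like the following sketch (the policy name is arbitrary):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml">      Policies:
        - PolicyName: terraform-deployment
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              # only allow Terraform to list S3 buckets, customise as needed
              - Effect: Allow
                Action: s3:ListAllMyBuckets
                Resource: "*"
</code></pre></div>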
<p>The Open ID Connect Provider is created as follows:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">TerraformOIDCProvider</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::IAM::OIDCProvider</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">Url</span><span class="p">:</span><span class="w"> </span><span class="l">https://app.terraform.io</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">ClientIdList</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="l">aws.workload.identity</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">ThumbprintList</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="l">9e99a48a9960b14926bb7f3b02e22da2b0ab7280</span><span class="w">
</span></span></span></code></pre></div><p>Once deployed, this template provides two outputs:</p>
<ol>
<li>The role ARN for the Terraform Deployment role.</li>
<li>An optional audience value, which is only needed if you want to customise it.</li>
</ol>
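<p>In CloudFormation terms these outputs can be declared as follows, a sketch assuming the resource names used above:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml">Outputs:
  TerraformDeploymentRoleArn:
    Description: The role ARN to configure in Terraform Cloud
    Value: !GetAtt TerraformDeploymentRole.Arn
  Audience:
    Description: The audience value expected by the role
    Value: aws.workload.identity
</code></pre></div>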
<h2 id="terraform-cloud-configuration">Terraform Cloud Configuration</h2>
<p>You’ll need to set a couple of environment variables in your Terraform Cloud workspace in order to authenticate with AWS using OIDC. You can set these as workspace variables, or if you’d like to share one AWS role across multiple workspaces, you can use a variable set.</p>
<table>
  <thead>
      <tr>
          <th>Variable</th>
          <th>Value</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>TFC_AWS_PROVIDER_AUTH</td>
          <td><code>true</code></td>
      </tr>
      <tr>
          <td>TFC_AWS_RUN_ROLE_ARN</td>
          <td>The role ARN from the CloudFormation stack outputs</td>
      </tr>
      <tr>
          <td>TFC_AWS_WORKLOAD_IDENTITY_AUDIENCE</td>
          <td>The optional audience value from the stack outputs. Defaults to <code>aws.workload.identity</code>.</td>
      </tr>
  </tbody>
</table>
<p>Note: for more advanced configuration options, please refer to <a href="https://developer.hashicorp.com/terraform/cloud-docs/workspaces/dynamic-provider-credentials/aws-configuration">Terraform Cloud - Dynamic Credentials with the AWS Provider</a>.</p>
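<p>If you prefer to manage these variables with Terraform itself, the <code>tfe</code> provider can set them. The following sketch assumes an existing <code>tfe_workspace.this</code> resource and a <code>terraform_deployment_role_arn</code> input variable, both hypothetical names:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-hcl" data-lang="hcl"># enable dynamic AWS credentials for runs in this workspace
resource "tfe_variable" "aws_provider_auth" {
  key          = "TFC_AWS_PROVIDER_AUTH"
  value        = "true"
  category     = "env"
  workspace_id = tfe_workspace.this.id
}

# the role ARN output by the CloudFormation stack
resource "tfe_variable" "aws_run_role_arn" {
  key          = "TFC_AWS_RUN_ROLE_ARN"
  value        = var.terraform_deployment_role_arn
  category     = "env"
  workspace_id = tfe_workspace.this.id
}
</code></pre></div>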
<p>That is it, you&rsquo;re now ready to run plans in your Terraform Cloud workspace!</p>
<h2 id="auditing">Auditing</h2>
<p>Once you have set up both sides of this solution you should be able to see events in <a href="https://aws.amazon.com/cloudtrail/">AWS CloudTrail</a>: filter by the event source <code>sts.amazonaws.com</code> and look at the <code>AssumeRoleWithWebIdentity</code> events. Each event contains a record of the Terraform Cloud run, and the name of the project and workspace.</p>
<p>This is a cut-down CloudTrail event showing the key information:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;userIdentity&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;type&#34;</span><span class="p">:</span> <span class="s2">&#34;WebIdentityUser&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;principalId&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:iam::12121212121212:oidc-provider/app.terraform.io:aws.workload.identity:organization:test-organization:project:Default Project:workspace:test-terraform-cloud:run_phase:plan&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;userName&#34;</span><span class="p">:</span> <span class="s2">&#34;organization:test-organization:project:Default Project:workspace:test-terraform-cloud:run_phase:plan&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;identityProvider&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:iam::12121212121212:oidc-provider/app.terraform.io&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">},</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;eventTime&#34;</span><span class="p">:</span> <span class="s2">&#34;2023-07-18T00:08:34Z&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;eventSource&#34;</span><span class="p">:</span> <span class="s2">&#34;sts.amazonaws.com&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;eventName&#34;</span><span class="p">:</span> <span class="s2">&#34;AssumeRoleWithWebIdentity&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;awsRegion&#34;</span><span class="p">:</span> <span class="s2">&#34;ap-southeast-2&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;sourceIPAddress&#34;</span><span class="p">:</span> <span class="s2">&#34;x.x.x.x&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;userAgent&#34;</span><span class="p">:</span> <span class="s2">&#34;APN/1.0 HashiCorp/1.0 Terraform/1.5.2 (+https://www.terraform.io) terraform-provider-aws/5.7.0 (+https://registry.terraform.io/providers/hashicorp/aws) aws-sdk-go-v2/1.18.1 os/linux lang/go/1.20.5 md/GOOS/linux md/GOARCH/amd64 api/sts/1.19.2&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;requestParameters&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;roleArn&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:iam::12121212121212:role/terraform-cloud-oidc-acces-TerraformDeploymentRole-NOPE&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;roleSessionName&#34;</span><span class="p">:</span> <span class="s2">&#34;terraform-run-abc123&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">},</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;responseElements&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;subjectFromWebIdentityToken&#34;</span><span class="p">:</span> <span class="s2">&#34;organization:test-organization:project:Default Project:workspace:test-terraform-cloud:run_phase:plan&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;assumedRoleUser&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;assumedRoleId&#34;</span><span class="p">:</span> <span class="s2">&#34;CDE456:terraform-run-abc123&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;arn&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:sts::12121212121212:assumed-role/terraform-cloud-oidc-acces-TerraformDeploymentRole-NOPE/terraform-run-abc123&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="p">},</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;provider&#34;</span><span class="p">:</span> <span class="s2">&#34;arn:aws:iam::12121212121212:oidc-provider/app.terraform.io&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;audience&#34;</span><span class="p">:</span> <span class="s2">&#34;aws.workload.identity&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">},</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;readOnly&#34;</span><span class="p">:</span> <span class="kc">true</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;eventType&#34;</span><span class="p">:</span> <span class="s2">&#34;AwsApiCall&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;recipientAccountId&#34;</span><span class="p">:</span> <span class="s2">&#34;12121212121212&#34;</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><h2 id="links">Links</h2>
<ul>
<li><a href="https://www.wiz.io/blog/how-to-get-rid-of-aws-access-keys-part-1-the-easy-wins">How to get rid of AWS access keys - Part 1: The easy wins</a></li>
<li><a href="https://developer.hashicorp.com/terraform/cloud-docs/workspaces/dynamic-provider-credentials/aws-configuration">Terraform Cloud - Dynamic Credentials with the AWS Provider</a></li>
<li><a href="https://aws.amazon.com/blogs/apn/simplify-and-secure-terraform-workflows-on-aws-with-dynamic-provider-credentials/">AWS Partner Network (APN) Blog - Simplify and Secure Terraform Workflows on AWS with Dynamic Provider Credentials</a></li>
</ul>
<p>So instead of using IAM User credentials, this approach uses IAM Roles and OpenID Connect to dynamically assign credentials to Terraform Cloud runs, which is a big win from a security perspective!</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Automated Cloud Security Remediation</title>
      <link>https://www.wolfe.id.au/2023/02/19/automated-cloud-security-remediation/</link>
      <pubDate>Sun, 19 Feb 2023 11:00:00 +1000</pubDate>
      
      <guid>https://www.wolfe.id.au/2023/02/19/automated-cloud-security-remediation/</guid>
      <description>&lt;p&gt;Recently I have been looking into automated security remediation to understand its impacts, positive and negative. As I am a user of AWS, as well as other cloud services, I was particularly interested in how it helped maintain security in these environments. As with anything, it is good to understand what problem it is trying to solve and why it exists in the first place.&lt;/p&gt;
&lt;h2 id=&#34;so-firstly-what-does-automated-security-remediation-for-a-cloud-service-do&#34;&gt;So firstly what does automated security remediation for a cloud service do?&lt;/h2&gt;
&lt;p&gt;This is software which detects threats, more specifically misconfigurations of services, and automatically remediates problems.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Recently I have been looking into automated security remediation to understand its impacts, positive and negative. As I am a user of AWS, as well as other cloud services, I was particularly interested in how it helped maintain security in these environments. As with anything, it is good to understand what problem it is trying to solve and why it exists in the first place.</p>
<h2 id="so-firstly-what-does-automated-security-remediation-for-a-cloud-service-do">So firstly what does automated security remediation for a cloud service do?</h2>
<p>This is software which detects threats, more specifically misconfigurations of services, and automatically remediates problems.</p>
<h2 id="how-does-automated-security-remediation-work">How does automated security remediation work?</h2>
<p>Typically, security remediation tools take a feed of events from a service such as <a href="https://aws.amazon.com/cloudtrail/">AWS CloudTrail</a> (audit logging service) and check the configuration of the resources being modified. This is typically paired with regular scheduled scans to ensure nothing is missed in the case of dropped or missing events.</p>
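<p>As a sketch of this pattern, the following hypothetical CloudFormation fragment routes CloudTrail-sourced events for a sensitive API call to a remediation Lambda function (the resource names and matched event are illustrative):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml">  RemediationRule:
    Type: AWS::Events::Rule
    Properties:
      # match S3 bucket ACL changes recorded by CloudTrail
      EventPattern:
        source:
          - aws.s3
        detail-type:
          - AWS API Call via CloudTrail
        detail:
          eventName:
            - PutBucketAcl
      Targets:
        - Arn: !GetAtt RemediationFunction.Arn
          Id: remediation-function
</code></pre></div>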
<h2 id="can-you-use-iam-to-avoid-security-misconfigurations-in-the-first-place">Can you use IAM to avoid security misconfigurations in the first place?</h2>
<p>Cloud services, such as AWS, have fairly complex <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> services which provide a coarse-grained security policy language in the form of IAM policies. These policies are hard to fine-tune for the myriad of security misconfigurations deployed by the people working in these cloud services.</p>
<p>Everyone has seen something like the following administrator policy, allowing all permissions for administrators of an AWS environment; this is fine for a &ldquo;sandbox&rdquo; learning account, but is far too permissive for production accounts.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">Version</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;2012-10-17&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">Statement</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">Sid</span><span class="p">:</span><span class="w"> </span><span class="l">AdminAccess</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Effect</span><span class="p">:</span><span class="w"> </span><span class="l">Allow</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Action</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;*&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Resource</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;*&#39;</span><span class="w">
</span></span></span></code></pre></div><p>That said, authoring IAM policies following the principle of least privilege to cover current requirements, new services coming online and emerging threats can be a significant cost in time and resources, and will likely at some point provide diminishing returns.</p>
<h2 id="can-you-use-aws-service-control-polices-scp-to-avoid-security-misconfigurations">Can you use AWS service control policies (SCP) to avoid security misconfigurations?</h2>
<p>In AWS there is another way to deny specific operations: service control policies (SCPs). These policies are part of AWS Organizations and provide another layer of control above an account&rsquo;s IAM policies, allowing administrators to target specific operations and protect common resources. Again, these are complex to configure and maintain, as they use the same coarse-grained policy language.</p>
<p>Below is an example SCP which prevents any VPC that doesn&rsquo;t already have internet access from getting it.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">Version</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;2012-10-17&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">Statement</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">Effect</span><span class="p">:</span><span class="w"> </span><span class="l">Deny</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">Action</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">ec2:AttachInternetGateway</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">ec2:CreateInternetGateway</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">ec2:CreateEgressOnlyInternetGateway</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">ec2:CreateVpcPeeringConnection</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">ec2:AcceptVpcPeeringConnection</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">globalaccelerator:Create*</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">globalaccelerator:Update*</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">Resource</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;*&#34;</span><span class="w">
</span></span></span></code></pre></div><p>Investment in SCPs is important for higher-level controls, such as preventing the modification of security services such as <a href="https://aws.amazon.com/guardduty/">Amazon GuardDuty</a>, <a href="https://aws.amazon.com/config/">AWS Config</a> and <a href="https://aws.amazon.com/cloudtrail/">AWS CloudTrail</a>, as changes to these services may result in data loss. That said, SCPs are still dependent on IAM&rsquo;s coarse-grained policy language, which in turn is limited by each service&rsquo;s integration with IAM.</p>
<p>A note about SCPs: often you will see exclusions for roles which enable administrators, or continuous integration and delivery (CI/CD) systems, to bypass these policies. These should be reserved for exceptional situations, for example bootstrapping of services, or incidents, so use of these roles should be gated via some sort of incident response process.</p>
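<p>A hypothetical sketch of such an SCP, protecting CloudTrail while excluding a break-glass role (the role name is illustrative):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml">Version: "2012-10-17"
Statement:
  - Effect: Deny
    Action:
      - cloudtrail:StopLogging
      - cloudtrail:DeleteTrail
    Resource: "*"
    Condition:
      # deny everyone except the break-glass role
      ArnNotLike:
        aws:PrincipalArn: arn:aws:iam::*:role/BreakGlassAdminRole
</code></pre></div>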
<h3 id="so-why-does-automated-security-remediation-exist">So why does automated security remediation exist?</h3>
<p>Given the complexity of managing fine-grained security policies, organizations often implement a more reactive solution in the form of automated security remediation services.</p>
<h3 id="what-are-some-of-disadvantages-of-these-automated-security-remediation-tools">What are some of the disadvantages of these automated security remediation tools?</h3>
<ul>
<li>False positives and false negatives: They may generate false positives, where legitimate actions are flagged as security threats, or false negatives, where actual security issues are missed.</li>
<li>Over-reliance on automation: Organizations may become over-reliant on tools, potentially leading to complacency or a lack of human oversight, which can create new risks and vulnerabilities.</li>
<li>Limited scope: They may not be able to detect or remediate all types of security issues or vulnerabilities, especially those that are highly complex or require a more nuanced approach.</li>
<li>Compliance and regulatory issues: Some compliance and regulatory frameworks may require manual security review or approval for certain types of security incidents, which can be challenging to reconcile with automated processes.</li>
<li>Cultural resistance: Some organizations may experience cultural resistance to automated remediation, as it may be perceived as a threat to job security or the role of security professionals.</li>
<li>Delayed or dropped trigger events: Automated remediation primarily depends on triggers from audit events; these events can be delayed in large AWS environments, or during a flood of activity.</li>
</ul>
<h2 id="what-are-some-of-the-positive-impacts-automated-remediation-tools">What are some of the positive impacts of automated remediation tools?</h2>
<ul>
<li>Increased efficiency: Can reduce the time and resources required to respond to security incidents, allowing security teams to focus on higher-value tasks.</li>
<li>Improved collaboration: Can help break down silos between different teams, as it often requires cross-functional collaboration between security, operations, and development teams.</li>
<li>Reduced burnout: By automating repetitive and time-consuming tasks, automated remediation can help reduce burnout among security professionals, who may otherwise be overwhelmed by the volume of security incidents they need to respond to manually.</li>
<li>Skills development: As organizations adopt these tools and processes, security teams may need to develop new skills and competencies in areas such as automation, scripting, and orchestration, which can have positive impacts on employee development and job satisfaction.</li>
<li>Cultural shift towards proactive security: They can help shift the culture of security within an organization from reactive to proactive, by enabling security teams to identify and remediate potential security risks before they become actual security incidents.</li>
</ul>
<h1 id="summary">Summary</h1>
<p>Overall, while automated security remediation can have some cultural and productivity impacts that need to be managed, it can also bring significant benefits to organizations by enabling more efficient, collaborative, and proactive security practices.</p>
<p>That said, automated security remediation really needs to be part of a three-pronged approach:</p>
<ol>
<li>Ensure people are working in cloud environments with only the privileges they require to do their work. There are of course exceptions to this, but they should be covered with a process which allows users to request more access when required.</li>
<li>SCPs should be used to protect security and governance services, and implement core restrictions within a collection of AWS accounts, depending on your business.</li>
<li>Automated security remediation should be used to cover the edge cases; again, this should be used only where necessary, and with the understanding that remediation may take some time.</li>
</ol>
<p>One thing to note is we are working in an environment with a lot of smart and resourceful people, so organizations need to watch for situations where complex workarounds evolve to sidestep ineffective or cumbersome controls; otherwise they may impact morale, onboarding of staff, and the overall success of the business.</p>
<p>Security works best when it balances threats and usability!</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>GitHub Actions supply chain attacks</title>
      <link>https://www.wolfe.id.au/2021/04/26/github-actions-supply-chain-attacks/</link>
      <pubDate>Mon, 26 Apr 2021 19:30:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2021/04/26/github-actions-supply-chain-attacks/</guid>
      <description>&lt;p&gt;There has been a lot of press about supply chain attacks recently. These types of attacks are nothing new, and understanding them is really important for developers using services such as &lt;a href=&#34;https://github.com/features/actions&#34;&gt;GitHub Actions&lt;/a&gt;, given continuous integration (CI) tools are a critical part of the supply chain used in software projects.&lt;/p&gt;
&lt;p&gt;A supply chain attack targets less secure parts of the development process: this could be the tools and services you depend on, or the Docker containers you host your software in. These attacks come in different forms, but some examples are:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There has been a lot of press about supply chain attacks recently. These types of attacks are nothing new, and understanding them is really important for developers using services such as <a href="https://github.com/features/actions">GitHub Actions</a>, given continuous integration (CI) tools are a critical part of the supply chain used in software projects.</p>
<p>A supply chain attack targets less secure parts of the development process: this could be the tools and services you depend on, or the Docker containers you host your software in. These attacks come in different forms, but some examples are:</p>
<ul>
<li>Extracting credentials from your CI services, like the <a href="https://about.codecov.io/security-update/">Codecov security incident</a>.</li>
<li>Seeding malware for a downstream attack on your customers, like the <a href="https://krebsonsecurity.com/tag/solarwinds-breach/">SolarWinds breach</a>.</li>
</ul>
<p>In this post I am going to dive into an example of an attack that affected a lot of projects using GitHub Actions recently, but this could be applied more broadly to any CI tool or service relying on third party services or code.</p>
<h1 id="why-is-the-codecov-security-incident-interesting">Why is the Codecov security incident interesting?</h1>
<p>The <a href="https://about.codecov.io/security-update/">Codecov security incident</a> illustrates a novel attack on a popular developer tool, which in turn exposed a number of CI integrations including the widely used GitHub Actions.</p>
<p>The initial attack happened in January when the Codecov Bash uploader script was modified in a cloud storage service.</p>
<p>This script provides a language-agnostic alternative for sending your coverage reports to Codecov and is used in at least 5 of the Codecov continuous integration (CI) integrations.</p>
<p>The Codecov GitHub Action was one of them; it downloads and executes the script each time it is run and, critically, didn&rsquo;t verify the checksum of this file against the release, so the modified script kept working undetected.</p>
<p>The modified script extracted all environment variables in that workflow and uploaded them to a website operated by the attacker.</p>
<p>These variables are often used to pass credentials into the workflow for services such as <a href="https://hub.docker.com">Docker Hub</a>, <a href="https://www.npmjs.com/">NPM</a>, cloud storage buckets and other software distribution services.</p>
<p>The extraction of these credentials while this exploit was active could lead to the modification of builds and other artifacts, resulting in further exploits and extending the footprint of this attack.</p>
<p>Most concerning is that this exploit was effectively sitting in the supply chain of <a href="https://github.com/search?l=&amp;q=codecov-action&#43;language%3AYAML&amp;type=code">1000s of open source</a> and proprietary workflows, extracting data undetected for approximately 4 months.</p>
<p><strong>Note:</strong> It is worth reading the <a href="https://about.codecov.io/security-update/">security update</a> posted by Codecov as it highlights some of the steps you need to take if you are affected by this sort of attack.</p>
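<p>Had the action verified the script against a published checksum before executing it, the modified uploader would have failed the build instead of running. A sketch of what that check looks like in a build step (the URLs and file names here are illustrative):</p>
<pre tabindex="0"><code># Download the uploader and its published SHA-256 checksum (illustrative URLs)
curl -fsSLO https://uploader.example.com/codecov
curl -fsSLO https://uploader.example.com/codecov.SHA256SUM

# Refuse to run the script unless it matches the published checksum
sha256sum -c codecov.SHA256SUM || exit 1
bash ./codecov
</code></pre>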
<h1 id="what-can-you-do-to-mitigate-these-sorts-of-attacks">What can you do to mitigate these sorts of attacks?</h1>
<p>To help keep your GitHub Actions secure I recommend:</p>
<ul>
<li>Read the <a href="https://docs.github.com/en/actions/learn-github-actions/security-hardening-for-github-actions">GitHub actions hardening</a> documentation.</li>
<li>Limit exposure of secrets to only the projects and repositories which need these values by implementing <a href="https://github.blog/2021-04-13-implementing-least-privilege-for-secrets-in-github-actions/">least privilege for secrets in GitHub Actions</a>.</li>
<li>Read about <a href="https://securitylab.github.com/research/github-actions-untrusted-input/">Keeping your GitHub Actions and workflows secure: Untrusted input</a>.</li>
<li>Regularly rotate the credentials used in your GitHub Actions; this helps mitigate historical backups or logs being leaked by a service.</li>
<li>If an action is supplied and supported by a vendor, ensure emails or advisories are sent to a shared email box, and not attached to a personal email. This will enable monitoring by more than one person, and enable you to go on holidays.</li>
</ul>
<p>For actions which have access to important secrets, like those used to upload your software libraries and releases, or deploying your services, you may want to fork them and add security scanning. This is more important if there are no vendor supported alternatives, or it is a less widely supported technology.</p>
<h1 id="reviewing-your-actions">Reviewing your actions</h1>
<p>Given we all still want the benefits of services such as GitHub Actions while also managing the risks, we need to maintain a balance between getting the most out of the service and limiting possible exploits.</p>
<p>The first step is to review the GitHub Actions you&rsquo;re using in your workflows, just like you would for open source libraries:</p>
<ul>
<li>How active are these projects? Are PRs merged / reviewed in a timely manner?</li>
<li>Is the author known for building good quality software and build automation?</li>
<li>Are these actions supported by a company or service?</li>
<li>Does the project have a security policy?</li>
</ul>
<p>When it comes to open source GitHub Actions you need to be aware that most open source licenses limit liability for the author; this means you as a consumer need to actively manage some of the risks of running this software. Performing maintenance, bug fixes and contributing to the upkeep of the software is key to ensuring the risk of exploits is minimized.</p>
<p>Lastly, run some internal training or workshops around supply chain attacks in your company; this could involve running a scenario like the Codecov incident as a <a href="https://blog.rsisecurity.com/how-to-perform-a-security-incident-response-tabletop-exercise/">tabletop exercise</a>.</p>
]]></content:encoded>
    </item>
    
    <item>
      <title>Why isn&#39;t my s3 bucket secure?</title>
      <link>https://www.wolfe.id.au/2020/10/08/why-isnt-my-s3-bucket-secure/</link>
      <pubDate>Thu, 08 Oct 2020 19:30:00 +1100</pubDate>
      
      <guid>https://www.wolfe.id.au/2020/10/08/why-isnt-my-s3-bucket-secure/</guid>
      <description>&lt;p&gt;We have all read horror stories of &lt;a href=&#34;https://aws.amazon.com/s3/&#34;&gt;Amazon Simple Storage Service&lt;/a&gt; (S3) buckets being “hacked” in the popular media, and we have seen lots of work by &lt;a href=&#34;https://aws.amazon.com&#34;&gt;Amazon Web Services&lt;/a&gt; (AWS) to tighten up controls and messaging around best practices. So how do the Amazon tools help you avoid some of the pitfalls with S3?&lt;/p&gt;
&lt;p&gt;Case in point, the &lt;a href=&#34;https://aws.amazon.com/cli/&#34;&gt;AWS CLI&lt;/a&gt; which a large number of engineers and developers rely on every day, the following command will create a bucket.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We have all read horror stories of <a href="https://aws.amazon.com/s3/">Amazon Simple Storage Service</a> (S3) buckets being “hacked” in the popular media, and we have seen lots of work by <a href="https://aws.amazon.com">Amazon Web Services</a> (AWS) to tighten up controls and messaging around best practices. So how do the Amazon tools help you avoid some of the pitfalls with S3?</p>
<p>Case in point, the <a href="https://aws.amazon.com/cli/">AWS CLI</a> which a large number of engineers and developers rely on every day, the following command will create a bucket.</p>
<pre tabindex="0"><code>$ aws s3 mb s3://my-important-data
</code></pre><p>One would assume this commonly referenced example, which is used in a lot of the resources provided by AWS, would create a bucket following best practices. But alas, no…</p>
<p>The configuration considered <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/security-best-practices.html">best practice for security of an S3 bucket</a> which is missing here includes:</p>
<ul>
<li>Enable Default Encryption</li>
<li>Block Public access configuration</li>
<li>Enforce encryption of data in transit (HTTPS)</li>
</ul>
<h2 id="why-is-this-a-problem">Why is this a Problem?</h2>
<p>I personally have a lot of experience teaching developers how to get started in AWS, and time and time again it is lax defaults which let this cohort down, especially while they are just getting started.</p>
<p>Sure there are <a href="https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cis-controls.html">guard rails</a> implemented using services such as AWS Security Hub, pointing out issues left, right and center, but these typically identify problems which wouldn&rsquo;t be there in the first place if the tools were providing better defaults.</p>
<p>Sure there is more advanced configuration but <strong>encryption</strong> and blocking <strong>public access</strong> by default seem like a good start, and would reduce the noise of these tools.</p>
<p>The key point here is it should be hard for new developers to avoid these recommended, and recognised best practices when creating an S3 bucket.</p>
<p>In addition to this, keeping up with the ever-growing list of “best practice” configuration is really impacting the velocity and morale of both seasoned developers and those new to the platform. Providing some tools which help developers keep up, and provide some uplift when upgrading existing infrastructure, would be a boon.</p>
<p>Now this is especially the case for developers building solutions using <em>serverless</em> as they tend to use more of the AWS native services, and in turn trigger more of these “guard rails”.</p>
<p>Lastly, there are a lot of developers out there who just don&rsquo;t have time to &ldquo;harden&rdquo; their environments, and teams who have no choice but to ignore &ldquo;best practices&rdquo;; both may benefit a lot from some uplift in this area.</p>
<h2 id="what-about-cloudformation">What about Cloudformation?</h2>
<p>To further demonstrate this issue, this is S3 bucket creation in <a href="https://aws.amazon.com/cloudformation/">CloudFormation</a>, which is the baseline orchestration tool for building resources, provided free of charge by AWS. This is a very basic example, as seen in a lot of projects on GitHub, and in the <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html#aws-properties-s3-bucket--examples">AWS CloudFormation documentation</a>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">      </span><span class="nt">MyDataBucket</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::S3::Bucket</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">BucketName</span><span class="p">:</span><span class="w"> </span><span class="l">MyDataBucket</span><span class="w">
</span></span></span></code></pre></div><p>Now you could argue that CloudFormation is doing exactly what you tell it to do; it is just a primitive layer which translates YAML or JSON into API calls to AWS. But I think this again really lets developers down.</p>
<p>Again this is missing default encryption and public access safeguards. In addition, a lot of quality tools also recommend the following:</p>
<ul>
<li>Explicit deny of Delete* operations, good practice for systems of record</li>
<li>Enable Versioning, optional but good practice for systems of record</li>
<li>Enable object access logging, which is omitted here to keep the example brief</li>
</ul>
<p>Below is a basic example with most of these options enabled; it is quite a lot to fill in for yourself.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">     </span><span class="nt">MyDataBucket</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::S3::Bucket</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">DeletionPolicy</span><span class="p">:</span><span class="w"> </span><span class="l">Retain</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">UpdateReplacePolicy</span><span class="p">:</span><span class="w"> </span><span class="l">Retain</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">BucketName</span><span class="p">:</span><span class="w"> </span>!<span class="l">Ref BucketName</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">BucketEncryption</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">ServerSideEncryptionConfiguration</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span>- <span class="nt">ServerSideEncryptionByDefault</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span><span class="nt">SSEAlgorithm</span><span class="p">:</span><span class="w"> </span><span class="l">AES256</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">VersioningConfiguration</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Status</span><span class="p">:</span><span class="w"> </span><span class="l">Enabled</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">PublicAccessBlockConfiguration</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">BlockPublicAcls</span><span class="p">:</span><span class="w"> </span><span class="kc">True</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">BlockPublicPolicy</span><span class="p">:</span><span class="w"> </span><span class="kc">True</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">IgnorePublicAcls</span><span class="p">:</span><span class="w"> </span><span class="kc">True</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">RestrictPublicBuckets</span><span class="p">:</span><span class="w"> </span><span class="kc">True</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">MyDataBucketPolicy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Type</span><span class="p">:</span><span class="w"> </span><span class="l">AWS::S3::BucketPolicy</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">Properties</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">Bucket</span><span class="p">:</span><span class="w"> </span>!<span class="l">Ref MyDataBucket</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">PolicyDocument</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Id</span><span class="p">:</span><span class="w"> </span><span class="l">AccessLogBucketPolicy</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Version</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;2012-10-17&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">Statement</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span>- <span class="nt">Sid</span><span class="p">:</span><span class="w"> </span><span class="l">AllowSSLRequestsOnly</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Action</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span>- <span class="l">s3:*</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Effect</span><span class="p">:</span><span class="w"> </span><span class="l">Deny</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Resource</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span>- !<span class="l">Sub &#34;arn:aws:s3:::${MyDataBucket}/*&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span>- !<span class="l">Sub &#34;arn:aws:s3:::${MyDataBucket}&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Condition</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span><span class="nt">Bool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                    </span><span class="nt">&#34;aws:SecureTransport&#34;: </span><span class="s2">&#34;false&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Principal</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;*&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span>- <span class="nt">Sid</span><span class="p">:</span><span class="w"> </span><span class="l">Restrict Delete* Actions</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Action</span><span class="p">:</span><span class="w"> </span><span class="l">s3:Delete*</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Effect</span><span class="p">:</span><span class="w"> </span><span class="l">Deny</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Principal</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;*&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">Resource</span><span class="p">:</span><span class="w"> </span>!<span class="l">Sub &#34;arn:aws:s3:::${MyDataBucket}/*&#34;</span><span class="w">
</span></span></span></code></pre></div><p>To do this with the AWS CLI in one command would require quite a few flags and options; rather than including that here, I will leave it as an exercise for the reader.</p>
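<p>That said, to give a feel for the exercise, here is a rough sketch of a few of the calls involved (the bucket name and region are placeholders, and the bucket policy above would still need to be applied separately with <code>put-bucket-policy</code>):</p>
<pre tabindex="0"><code># Create the bucket (placeholder name and region)
aws s3api create-bucket --bucket my-important-data \
    --create-bucket-configuration LocationConstraint=ap-southeast-2

# Turn on default encryption
aws s3api put-bucket-encryption --bucket my-important-data \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Block all forms of public access
aws s3api put-public-access-block --bucket my-important-data \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Enable versioning
aws s3api put-bucket-versioning --bucket my-important-data \
    --versioning-configuration Status=Enabled
</code></pre>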
<p>Now some may say this is a great opportunity for consulting companies to endlessly uplift customer infrastructure. But this again raises the questions:</p>
<ol>
<li>Why is this the case for customers using the recommended tools?</li>
<li>What about developers getting started on their first application?</li>
<li>Wouldn&rsquo;t it be better to have these consultants building something new, rather than crafting reams of YAML?</li>
</ol>
<h2 id="why-provide-resources-which-are-secure-by-default">Why Provide Resources which are Secure by Default?</h2>
<p>So I have used S3 buckets as a very common example, but there is an ever-growing list of services in AWS that I think would benefit from better default configuration.</p>
<p>Just to summarise some of the points I have made above:</p>
<ol>
<li>It would make it harder for those new to the cloud to do the wrong thing when following examples.</li>
<li>The cost of building and maintaining infrastructure would be reduced over time, as safer defaults would remove the need for pages of code to deploy “secure” S3 buckets.</li>
<li>For new and busy developers, things would be mostly right from the beginning, and the baseline would improve over time, at least for new applications, leaving them more time to do the actual work they should be doing.</li>
</ol>
<p>So anyone who is old enough to remember <a href="https://en.wikipedia.org/wiki/Solaris_%28operating_system%29">Sun Solaris</a> will recall the “secure by default” effort launched with Solaris 10 around 2005, which also came with “self healing” (stretch goal for AWS?). So security issues around defaults are not a new problem, and have been addressed before!</p>
<h2 id="follow-up-qa">Follow Up Q&amp;A</h2>
<p>I have added some of the questions I received while reviewing this article, with some answers I put together.</p>
<h4 id="will-cdk-help-with-this-problem-of-defaults">Will CDK help with this problem of defaults?</h4>
<p>As it stands now, I don&rsquo;t believe the default S3 bucket construct has any special default settings. There is certainly room for someone to make &ldquo;secure&rdquo; versions of the constructs, but developers would need to search for them, and that kind of misses the point of helping the wider AWS user community.</p>
<h4 id="why-dont-you-just-write-your-own-cli-to-create-buckets">Why don&rsquo;t you just write your own CLI to create buckets?</h4>
<p>This is a good suggestion; however, I already have my fair share of side projects, and if I were to do this it would need to be championed by an organisation and team that got value from the effort. But again, needing to tell every new engineer to ignore the default AWS CLI because it isn&rsquo;t &ldquo;secure&rdquo; seems less than ideal; I really want everyone to be &ldquo;secure&rdquo;.</p>
<h4 id="how-did-you-come-up-with-this-topic">How did you come up with this topic?</h4>
<p>Well, I am currently working through &ldquo;retrofitting&rdquo; best practices (the latest ones) on a bunch of AWS serverless stacks which I helped build a year or so ago. This is when I asked the question: why am I searching for, and then helping to document, what should be &ldquo;baseline&rdquo; configuration for S3 buckets?!</p>
<h4 id="wont-this-make-the-tools-more-complicated-adding-all-these-best-practices">Won&rsquo;t this make the tools more complicated adding all these best practices?</h4>
<p>I think any uplift at all would be a bonus at the moment. I don&rsquo;t think it would be wise to take on every best practice out there, but surely the 80/20 rule would apply here. Anything to reduce the amount of retrofitting we need to do would be a good thing in my view.</p>
]]></content:encoded>
    </item>
    
  </channel>
</rss>
