How to create a CodePipeline that builds a JAR file from Java code stored on GitHub and deploys it to a Lambda function?


Question


I want to build a CodePipeline that will get the code (Java) from GitHub, build a JAR file, and deploy it to AWS Lambda (or store the JAR in a specific S3 bucket). I want to use only tools provided by the AWS platform.

If I use only CodeBuild, I am able to build the JAR from the GitHub code and store it in S3 (https://docs.aws.amazon.com/codebuild/latest/userguide/getting-started.html), and I am using a deployer Lambda function to deploy the code to my service Lambda. Whenever there is any change in the S3 bucket, the deployer Lambda gets triggered.
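For reference, the S3 trigger on the deployer Lambda is wired roughly as in the following minimal CloudFormation sketch (the logical names, handler class, and code location are placeholders, not my real resources):

Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: s3:ObjectCreated:*            # fire on every new JAR upload
            Function: !GetAtt DeployerLambda.Arn
  DeployerLambda:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: java8
      Handler: com.example.Deployer::handler     # placeholder handler class
      Role: !GetAtt DeployerRole.Arn             # IAM role definition omitted
      Code:
        S3Bucket: my-deployer-code-bucket        # placeholder
        S3Key: deployer.jar                      # placeholder

(An AWS::Lambda::Permission granting s3.amazonaws.com permission to invoke the deployer function is also required; omitted here for brevity.)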

Drawback: the problem with this is that I have to run CodeBuild manually every time after committing changes to GitHub. I want CodeBuild to detect changes from GitHub automatically.

To solve the above issue I have made a CodePipeline that detects code changes using GitHub webhooks, but here it is creating a ZIP file instead of a JAR.
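(For reference, the webhook is registered roughly like this; a minimal CloudFormation sketch with placeholder pipeline, action, and secret names:)

PipelineWebhook:
  Type: AWS::CodePipeline::Webhook
  Properties:
    Authentication: GITHUB_HMAC
    AuthenticationConfiguration:
      SecretToken: my-webhook-secret          # placeholder; use a real secret value
    Filters:
      - JsonPath: $.ref                       # fire only for pushes to the tracked branch
        MatchEquals: refs/heads/{Branch}
    TargetPipeline: !Ref MyPipeline           # placeholder pipeline resource
    TargetAction: Source                      # name of the pipeline's source action
    TargetPipelineVersion: !GetAtt MyPipeline.Version
    RegisterWithThirdParty: true              # let AWS register the hook in GitHub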

So what I am actually trying to do is:

GitHub (change) ---> CodeBuild ---> store the JAR file in a specific S3 bucket with a specific name, or deploy it to Lambda

buildspec.yml

version: 0.2

phases:
  build:
    commands:
      - echo Build started on `date`
      - mvn test
  post_build:
    commands:
      - echo Build completed on `date`
      - mvn package
artifacts:
  files:
    - target/testfunction-1.0.0-jar-with-dependencies.jar

Answer 1:


CodePipeline artifact locations are different for each pipeline execution, so each execution is isolated.

I think what you'll want to do is produce a JAR file in CodeBuild, which will end up in a CodePipeline artifact with a ZIP format. You can then add a second CodeBuild action that accepts the output of the first CodeBuild action (the CodeBuild action will unzip the input artifact for you) and deploys to S3, which is pretty trivial to script with the AWS CLI; a sketch of such a deploy buildspec follows.
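For example, the deploy action's buildspec can be a minimal sketch like the one below (assuming the first action's artifact keeps the target/ path from the question's buildspec; the bucket and function names are placeholders):

version: 0.2

phases:
  build:
    commands:
      # CodeBuild has already unzipped the input artifact into the source
      # directory, so the JAR sits at the path the first buildspec exported.
      - aws s3 cp target/testfunction-1.0.0-jar-with-dependencies.jar s3://my-artifact-bucket/testfunction.jar
      # ...or push the code straight to the function instead:
      - aws lambda update-function-code --function-name my-service-lambda --s3-bucket my-artifact-bucket --s3-key testfunction.jar

Note that this action's CodeBuild service role needs s3:PutObject on the target bucket and lambda:UpdateFunctionCode on the target function.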

It's entirely possible to combine both CodeBuild actions, but I like to keep the "build" and "deploy" steps separate.




Answer 2:


First off, CodeDeploy is baffling when it comes to setting up a simple pipeline that updates a Lambda function when a GitHub commit happens. It shouldn't be this hard. We created the following Lambda function, which can process the CodePipeline job's build artifact (ZIP) and push the JAR update to Lambda using updateFunctionCode.

import com.amazonaws.services.codepipeline.AWSCodePipeline;
import com.amazonaws.services.codepipeline.AWSCodePipelineClientBuilder;
import com.amazonaws.services.codepipeline.model.FailureDetails;
import com.amazonaws.services.codepipeline.model.PutJobFailureResultRequest;
import com.amazonaws.services.codepipeline.model.PutJobSuccessResultRequest;
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.UpdateFunctionCodeRequest;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import org.json.JSONObject;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

/**
 * Created by jonathan and josh on 1/22/2019.
 * <p>
 * Process Code Pipeline Job
 */
@SuppressWarnings("unused")
public class CodePipelineLambdaUpdater {
  private static AWSCodePipeline codepipeline = null;
  private static AmazonS3 s3 = null;
  private static AWSLambda lambda = null;

  @SuppressWarnings("UnusedParameters")
  public void handler(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
    // Read the job JSON object
    String json = new String(readStreamToByteArray(inputStream), "UTF-8");
    JSONObject eventJsonObject = new JSONObject(json);

    // Extract the jobId first
    JSONObject codePipelineJobJsonObject = eventJsonObject.getJSONObject("CodePipeline.job");
    String jobId = codePipelineJobJsonObject.getString("id");

    // Initialize the code pipeline client if necessary
    if (codepipeline == null) {
      codepipeline = AWSCodePipelineClientBuilder.defaultClient();
    }
    if (s3 == null) {
      s3 = AmazonS3ClientBuilder.defaultClient();
    }
    if (lambda == null) {
      lambda = AWSLambdaClientBuilder.defaultClient();
    }

    try {
      // The bucketName and objectKey refer to the intermediate ZIP file produced by CodePipeline
      JSONObject jobData = codePipelineJobJsonObject.getJSONObject("data");
      JSONObject s3Location = jobData.getJSONArray("inputArtifacts").getJSONObject(0)
          .getJSONObject("location").getJSONObject("s3Location");
      String bucketName = s3Location.getString("bucketName");
      String objectKey = s3Location.getString("objectKey");
      // The user parameter is the Lambda function name that we want to update.  This is configured when adding the CodePipeline action
      String functionName = jobData.getJSONObject("actionConfiguration").getJSONObject("configuration").getString("UserParameters");

      System.out.println("bucketName: " + bucketName);
      System.out.println("objectKey: " + objectKey);
      System.out.println("functionName: " + functionName);

      // Download the object
      S3Object s3Object = s3.getObject(new GetObjectRequest(bucketName, objectKey));

      // Read the JAR out of the ZIP file.  Should be the only file for our Java code
      ZipInputStream zis = new ZipInputStream(s3Object.getObjectContent());
      ZipEntry zipEntry;
      byte[] data = null;
      while ((zipEntry = zis.getNextEntry()) != null) {
        if (zipEntry.getName().endsWith(".jar")) {
          System.out.println("zip file: " + zipEntry.getName());
          data = readStreamToByteArray(zis);
          System.out.println("Length: " + data.length);
          break;
        }
      }

      // If we have data then update the function
      if (data != null) {
        // Update the lambda function
        UpdateFunctionCodeRequest updateFunctionCodeRequest = new UpdateFunctionCodeRequest();
        updateFunctionCodeRequest.setFunctionName(functionName);
        updateFunctionCodeRequest.setPublish(true);
        updateFunctionCodeRequest.setZipFile(ByteBuffer.wrap(data));
        lambda.updateFunctionCode(updateFunctionCodeRequest);
        System.out.println("Updated function: " + functionName);

        // Indicate success
        PutJobSuccessResultRequest putJobSuccessResultRequest = new PutJobSuccessResultRequest();
        putJobSuccessResultRequest.setJobId(jobId);
        codepipeline.putJobSuccessResult(putJobSuccessResultRequest);
      } else {
        // Fail the job
        PutJobFailureResultRequest putJobFailureResultRequest = new PutJobFailureResultRequest();
        putJobFailureResultRequest.setJobId(jobId);
        FailureDetails failureDetails = new FailureDetails();
        failureDetails.setMessage("No data available to update function with.");
        putJobFailureResultRequest.setFailureDetails(failureDetails);
        codepipeline.putJobFailureResult(putJobFailureResultRequest);
      }

      System.out.println("Finished");
    } catch (Throwable e) {
      // Handle all other exceptions
      System.out.println("Well that ended badly...");
      e.printStackTrace();

      PutJobFailureResultRequest putJobFailureResultRequest = new PutJobFailureResultRequest();
      putJobFailureResultRequest.setJobId(jobId);
      FailureDetails failureDetails = new FailureDetails();
      failureDetails.setMessage("Failed with error: " + e.getMessage());
      putJobFailureResultRequest.setFailureDetails(failureDetails);
      codepipeline.putJobFailureResult(putJobFailureResultRequest);
    }
  }

  private static void copy(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[100000];


    for (; ; ) {
      int rc = in.read(buffer);
      if (rc == -1) break;
      out.write(buffer, 0, rc);
    }

    out.flush();
  }

  private static byte[] readStreamToByteArray(InputStream in) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();

    try {
      copy(in, baos);
    } finally {
      safeClose(in);
    }

    return baos.toByteArray();
  }

  private static InputStream safeClose(InputStream in) {
    try {
      if (in != null) in.close();
    } catch (Throwable ignored) {
    }
    return null;
  }
}

This is the project's Maven POM file.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.yourcompany</groupId>
    <artifactId>codepipeline-lambda-updater</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>com.amazonaws</groupId>
                <artifactId>aws-java-sdk-bom</artifactId>
                <version>1.11.487</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-core</artifactId>
            <version>1.1.0</version>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-lambda</artifactId>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-core</artifactId>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-s3 -->
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-s3</artifactId>
            <version>1.11.487</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-codepipeline -->
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-codepipeline</artifactId>
            <version>1.11.487</version>
        </dependency>

        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-api</artifactId>
            <version>2.10.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>2.10.0</version>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-log4j2</artifactId>
            <version>1.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.jetbrains</groupId>
            <artifactId>annotations</artifactId>
            <version>15.0</version>
        </dependency>
        <!--<dependency>-->
            <!--<groupId>com.google.code.gson</groupId>-->
            <!--<artifactId>gson</artifactId>-->
            <!--<version>2.8.2</version>-->
        <!--</dependency>-->
        <!-- https://mvnrepository.com/artifact/org.json/json -->
        <dependency>
            <groupId>org.json</groupId>
            <artifactId>json</artifactId>
            <version>20180813</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.1</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <transformers>
                                <transformer
                                        implementation="com.github.edwgiz.mavenShadePlugin.log4j2CacheTransformer.PluginsCacheFileTransformer">
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
                <dependencies>
                    <dependency>
                        <groupId>com.github.edwgiz</groupId>
                        <artifactId>maven-shade-plugin.log4j2-cachefile-transformer</artifactId>
                        <version>2.8.1</version>
                    </dependency>
                </dependencies>
            </plugin>
        </plugins>
    </build>

</project>

This baseline should get you started. Embellish the code to do fancier deployments using further SDK calls as you see fit.



Source: https://stackoverflow.com/questions/53984174/how-to-create-a-codepipeline-to-build-jar-file-from-java-code-stored-at-github-a
