v1.0.0 Release (#248)

* allow non-email usernames

* Cache the mediaconvert endpoint in order to avoid getting throttled on the DescribeEndpoints API.
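The endpoint cache mentioned above follows a simple memoization pattern. A minimal sketch (not the repo's actual code — the function name and injected fetcher are illustrative, chosen so the pattern is testable without AWS credentials):

```python
# Cache the MediaConvert account endpoint so DescribeEndpoints is called
# once per process, not on every invocation (the API is heavily throttled).
_cached_endpoint = None

def get_mediaconvert_endpoint(fetch_endpoint):
    """Return the cached endpoint URL, calling fetch_endpoint() only on a cache miss."""
    global _cached_endpoint
    if _cached_endpoint is None:
        _cached_endpoint = fetch_endpoint()
    return _cached_endpoint
```

With boto3 the fetcher would be something like `lambda: boto3.client("mediaconvert").describe_endpoints()["Endpoints"][0]["Url"]`.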

* allow input text to be empty

* Add support for new languages in AWS Translate and Transcribe

* Add support for new languages in AWS Translate and Transcribe

* V0.1.6 bug fixes (#140)

* allow non-email usernames

* Cache the mediaconvert endpoint in order to avoid getting throttled on the DescribeEndpoints API.

* allow input text to be empty

* Add support for new languages in AWS Translate and Transcribe

* Add support for new languages in AWS Translate and Transcribe

* fix python 3.6 build errors and add support for python 3.8

* Fix markdown anchor for glossary

* add support to delete an asset from elasticsearch (#142)

* fix template validation error that happens when DeployAnalyticsPipeline=false but DeployDemoSite=true

* Mitigate XSS threats (#147)

* add subresource integrity (SRI) checksums so browsers can verify that the files they fetch are delivered without unexpected manipulation.
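The SRI value a browser verifies is simply the base64-encoded SHA-384 digest of the file bytes, prefixed with the algorithm name. Build plugins compute this automatically; a sketch only to illustrate what ends up in the `integrity` attribute of a `<script>` tag:

```python
import base64
import hashlib

def sri_sha384(data: bytes) -> str:
    """Compute a subresource-integrity token: 'sha384-' + base64(SHA-384 digest)."""
    digest = hashlib.sha384(data).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")
```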

* move runtime configs from .env to /public/runtimeConfig.json

* webapp code cleanup

* webapp code cleanup

* Updated tests (#149)

This PR focuses on scoping IAM policies with least privilege. Along the way we have also improved the organization of build scripts and unit tests so they're easier to use.

Summary:
* Least-privilege concerns were addressed by updating CloudFormation templates to resolve issues reported by cfn_nag and viperlight

* We used to have many run_test.sh scripts to run unit tests. These have been consolidated into one script, tests/run_tests.sh, which you can run like this:
`echo "$REGION \n $MIE_STACK_NAME \n $MIE_USERNAME \n $MIE_PASSWORD" | ./tests/run_tests.sh`

Details:
* a pass at refactoring iam roles/policies

* refactor tests to use media in dataplane bucket, big test overhaul, small IAM changes for dataplane

* do not assume the user has put the region at the end of the bucket name

* Remove sam_translate from dataplaneapi and workflowapi.
Organize the code and output so it's easier to follow.
Access MIE Helper package from source/lib/ instead of /lib.

* Apply bash syntax optimizations

* Access MIE Helper package from source/lib/ instead of /lib.

* update lib path to mie helper

* remove redundant doc

* add stream encryption to fix cfn_nag warning

* remove sam-translate.py files

* remove old /webapp and /lib

* remove old /webapp and /lib

* rename license file per AWS guidelines

* rename notice file per AWS guidelines

* output misc debug info

* move tests/ into source/

Co-authored-by: Ian Downard <[email protected]>

* Add mediainfo and transcode operators (#150)

Resolved Issues:

#32
#138
#152
#151
#128
#153
#154
#156
#157

Summary of changes:

1. added proxy encode to mediaconvert job that generates thumbnails
2. added MediaInfo libraries to MIE lambda layer. Also published these layers in the Technical Marketing public S3 buckets.
3. added MediaInfo operator to MIE Complete Workflow and show mediainfo data in webui
4. major organization improvements in the build script
5. fixed minor webpack warnings
6. Added support for videos without spoken words
7. Added support for videos without any audio tracks
8. Added security measures to prevent users from uploading invalid media files
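The guards for silent videos and audio-less videos amount to short-circuiting the downstream operators. A hedged sketch — function and field names are illustrative, not the repo's actual API:

```python
def should_skip_transcribe(mediainfo_tracks) -> bool:
    """True when no audio track is present, so Transcribe is skipped entirely."""
    return not any(t.get("track_type") == "Audio" for t in mediainfo_tracks)

def should_skip_comprehend(transcript: str) -> bool:
    """True when the transcript is empty or whitespace-only (video with no spoken words)."""
    return not transcript.strip()
```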

Details:
* Add mediainfo operator
* Add MediaInfo library to MIE lambda layer
* avoid webpack warnings about package size
* fix compile-time jquery warning
* remove unused requirements file
* minor code cleanup
* add log statement so we're consistent with other components
* show mediainfo data in analysis page
* explain how to enable hot-reload in dev mode
* Explain how to validate data in elasticsearch.
* Explain how to read/write metadata from one operator to another via workflow output objects.
* skip comprehend operators when transcript is empty
* skip comprehend operators when transcript is empty
* skip transcribe if video is silent
* use proxy encoded video for Rekognition operators
* recognize more image file types when determining what to use for thumbnail
* use a consistent print statement for logging the incoming lambda event object
* Now that we're supporting media formats besides mp4 and jpg, use a generic image or video media type. We can't assume "video/mp4" or "image/jpg" anymore.
* Remind developers that workflow attributes must be non-empty strings.
* Add transcode to mediaconvert job. Use that for the proxy encode input to downstream operators.
* Move the transcode operation from the mediaconvert operator to the thumbnail operator. The thumbnail operator now supersedes the old mediaconvert operator, which we've disabled. After testing, we can remove the old mediaconvert operator.
* Avoid drawing boxes outside the dimensions of the video player.
* Thumbnail operator needs a check-status function now that it includes transcode. This commit adds that check-status function to the build script.
* minor edit, just to reorder packages to improve readability
* Move thumbnail operator to prelim stage so all mediaconvert outputs are ready before analysis operators begin.
* avoid showing undefined mediainfo attributes
* use free tier for elasticsearch domain
* change header title to AWS Content Analysis
* validate file types before upload
* build layer for python 3.8 runtime
* explain how to validate that the layer includes certain libraries
* add PointInTimeRecoveryEnabled and HTTP (non-ssl) Deny rule to dataplane bucket
* add versioning to S3 bucket
* validate file type before upload and enable Mediainfo for image workflow
* consolidate the code for checking image types
* use webpack's default devServer https option
* support all caps filenames
* remove input media from upload/ after copying it to private/assets/[asset_id]/input/
* if input file is not a valid media file then remove it from S3
* Get mediaconvert endpoint from cache if available
* Specify thumbnail as the first mediaconvert job so the thumbnail images become available as soon as possible. This lessens the likelihood of seeing broken thumbnail images in the webui.
* Add Mediainfo to Image workflow and allow Mediainfo to delete files from S3.
* minor edit to remove unnecessary whitespace
* minor edit to fix a 'key not found' exception that occurred when testing an empty workflow execution request (e.g. POST {} payload to /api/workflow/execution)
* Add Mediainfo to image workflow
* minor edit: remove errant comma
* add CloudFormation string functions so we can use (lower case) stack name for mie website bucket
* fix bug in error messages for invalid file types
* fix yaml syntax errors
* fix invalid table query when invoking a GET on $WORKFLOW_API_ENDPOINT/workflow/execution/status/Error
* fix "key not found" error that occurs running workflows that include transcribe but not mediainfo
* 1) Update workflow configs and 2) upload media prior to every workflow execution because dataplane now deletes the uploaded media after copying it to private/assets/.
* upload media prior to workflow execution because dataplane now deletes the uploaded media after copying it to private/assets/.
* 1) Update workflow configs and 2) upload media prior to every workflow execution because dataplane now deletes the uploaded media after copying it to private/assets/.
* cleanup comments
* Use app.current_request.raw_body.decode instead of app.current_request.json_body in order to work around a bug in Chalice whereby it returns None for json_body. Reference: https://stackoverflow.com/questions/52789943/cannot-access-the-request-json-body-when-using-chalice
* append a unique id to image files uploaded to s3 so there are no conflicts between multiple threads running this concurrency test
* Handle the HTTP 409 and 500 errors that happen when tests don't clean up properly.
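The Chalice `json_body` workaround mentioned in the list above can be isolated into a plain function so the parsing logic is testable outside Chalice. A sketch (the helper name is illustrative):

```python
import json

def parse_request_body(raw_body: bytes) -> dict:
    """Decode and parse a JSON request body; an empty body becomes {}."""
    text = raw_body.decode("utf-8").strip()
    return json.loads(text) if text else {}
```

In a Chalice route this would be called as `parse_request_body(app.current_request.raw_body)` rather than reading `app.current_request.json_body`, which can come back `None` for some requests. It also handles the empty `POST {}` payload case without raising a 'key not found' exception.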

* add cost information

* minor edits

* minor edits

* minor edits

* minor edits

* fix bug detection silent videos

* bump up the python version

* bump up the python version

* Rek detect text in video support (#158)

* rek text detection functionality

* bug fixes for player markers and re-addition of accidentally deleted code for text detection

* fix string operation to determine file type

* get input video from ProxyEncode (#168)

* get input video from ProxyEncode

* add new region support for Rekognition (#163)

* allow users to upload videos with formats supported by mediaconvert (#169)

* get input video from ProxyEncode

* add new region support for Rekognition

* allow users to upload videos with formats supported by mediaconvert (#164)

* allow users to upload videos with formats supported by mediaconvert

* Allow users to upload webm files.

* fix bug with determining key to proxy encode mp4

* fix bug with determining key to proxy encode mp4 (#170)

* get input video from ProxyEncode

* add new region support for Rekognition

* allow users to upload videos with formats supported by mediaconvert

* Allow users to upload webm files.

* fix bug with determining key to proxy encode mp4

* Disable versioning on dataplane bucket (#171)

* Disable versioning on the dataplane bucket so that the bucket can be removed more easily

* minor edit

* support for rerunning analysis on an existing asset (#175)

* support for rerunning analysis on an existing asset

* bug fix in webapp code

* fix formatting issues after merge and update status to be polled by wf id

* Add gitter chat info (#182)

* Bumps [jquery](https://github.com/jquery/jquery) from 1.12.4 to 3.4.1.
- [Release notes](https://github.com/jquery/jquery/releases)
- [Commits](jquery/jquery@1.12.4...3.4.1)

* add gitter channel info

* Bumps [jquery](https://github.com/jquery/jquery) from 1.12.4 to 3.4.1. (#181)

- [Release notes](https://github.com/jquery/jquery/releases)
- [Commits](jquery/jquery@1.12.4...3.4.1)

* Fix mediainfo (#180)

* remove VersioningConfiguration on S3 bucket since that makes it much harder for AWS account owners to delete the bucket.

* MediaInfo version 19.09 works but 20.03 does not. Use 19.09 instead of latest.

* update one-click deploy links for release version 0.1.7

* testing buildspec

* bump python version in buildspec

* remove unneeded quotes from build command

* Change distribution bucket instructions (#189)

Previously, the instruction was to create a distribution bucket named $DIST_OUTPUT_BUCKET-$REGION, but now in `deployment/build-s3-dist.sh` it's expected to be just $DIST_OUTPUT_BUCKET.

* Init of build pipeline (#193)

* working build pipeline

* fix testing spec filename

* persist build user

* Add logo (#194)

The clapperboard, representing *multimedia*, is centered inside a crosshair, representing *under extreme scrutiny*. This symbol is available from [nounproject](https://thenounproject.com/icon/1815092/). The font is Engineering Plot, https://www.dafont.com/engineering-plot.font which conveys the scaffolding nature of MIE.

* Move my forked version of the isolated MIE backend into the main repo for collab (#196)

* init isolated mie framework

* removed unneeded email param

* add restapi ids to outputs

* add todo for cors

* Update README.md (#197)

Improve instructions in the README:
* fix references to old MediaInsightsEngine repository name 
* use docker port forwarding to enable developers to see the result of npm run serve on their local machine

* Update media-insights-stack.yaml (#198)

fix PolicyName typo

* Prevent duplication of this.entities (#201)

If the user switches from the Entities tab to KeyPhrases and back, this.entities doubles in size.
To prevent this, we can employ the same method of clearing memory that is used in ComprehendKeyPhrases

* Avoid linking to step functions for queued workflows because that link will break since the step function doesn't exist yet. (#210)

* Added Cognito Identity Pool ID to the output of CF (#211)

Add IDENTITY_POOL_ID to stack outputs in order to make it easier for users to find the values they will need for the `webapp/public/runtimeConfig.json` file when trying to run the webapp locally on their laptop.
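For local development the webapp reads its settings from `webapp/public/runtimeConfig.json`, populated from the stack outputs. A sketch of its shape — treat the keys and placeholder values as illustrative, not a definitive schema for any particular release:

```json
{
  "AWS_REGION": "us-east-1",
  "IDENTITY_POOL_ID": "<IdentityPoolId stack output>",
  "USER_POOL_ID": "<UserPoolId stack output>"
}
```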

* change logo. The MIE team agreed to use the 3d black and white logo w… (#200)

* change logo. The MIE team agreed to use the 3d black and white logo without a slogan.

* move logo files to doc/images

* Update gui readme (#202)

* add instructions for creating new accounts for the GUI and remove out-of-date instructions for running the webapp.

* Add quantitative cost info

* fix typo

* add cursor usage info

* document limitations

* update 3rd party licenses to include every package listed in package.json

* remove local dist and package files after build

* remove license file from MIE lambda helper. This was left over from when the lambda helper used to be in its own repo

* Remove reference to old MediaInsightsEngineLambdaHelper repo. It used to be managed in a different repo but now it's part of this repo.

* Video segment detection / v0.1.8 one click links (#215)

* working segment detection v1

* working segment detection w/ api changes

* added end scene pause functionality and pagination to scene detection tables

* fix webapp deploy bug

* reformat readme for simpler installation

* updated readme with instructions for installation

* remove values from runtimeConfig and set sriplugin to true

* Added .vscode to gitignore

* Updated MIE_ACCESS_TOKEN to retrieve the token from the correct path in the export statement under IMPLEMENTATION_GUIDE. Updated MieDataplaneApiHandlerRolePolicy to include ListBucket on the Dataplane bucket to ensure the S3 NoSuchKey error message is given instead of AccessDenied when accessing a missing S3 key.

* Added ListBucket to Dataplane bucket policy for better debugging and minor documentation correction (#235)

Updated the Dataplane API Handler's Role policy to include a ListBucket action on the Dataplane S3 bucket. This is done so that the developer gets a NoSuchKey error when accessing an invalid S3 key instead of getting AccessDenied. The incorrect message makes it hard to debug especially when all required permissions for execution of the Lambda exist.

Updated the path under Implementation guide to reflect the correct path when exporting the MIE_ACCESS_TOKEN. Currently: $MIE_DEVELOPMENT_HOME/tests/getAccessToken.py.
Proposed change: $MIE_DEVELOPMENT_HOME/source/tests/getAccessToken.py

Added .vscode/ to gitignore as a QOL improvement for VSCode users.

* more updates to backend mie stack

* add addl exports to template

* fix #236

* adjust template url location

* remove missed merge conflict text from stack template

* remove commented-out code

* use virtual-hosted style s3 paths

* use virtual-hosted style s3 paths

* Update documentation to reflect the new MIE backend

* fix s3 copy error

* Update installation instructions.
Move implementation guide to Media Insights front-end repo.

* adjust images

* adjust images

* minor update

* minor update

* minor update

* minor update

* minor update

* remove changes to gitignore and IMPLEMENTATION_GUIDE.md

* Remove a modified file from pull request

* Remove a modified file from pull request

* Remove a modified file from pull request

* fix #206 (#246)

* minor update

* minor update

* change rodeolabz s3 folder

* change rodeolabz s3 folder

* minor update

* change rodeolabz s3 folder

* change rodeolabz s3 folder

* add postman screenshot

* update version used in the one-click deploy links

Co-authored-by: brand161 <[email protected]>
Co-authored-by: Brandon Dold <[email protected]>
Co-authored-by: brandold <[email protected]>
Co-authored-by: Tulio Casagrande <[email protected]>
Co-authored-by: Anton <[email protected]>
Co-authored-by: Rajesh <[email protected]>
7 people authored Oct 7, 2020
1 parent a663d95 commit 49df487
Showing 74 changed files with 185 additions and 10,810 deletions.
1,370 changes: 0 additions & 1,370 deletions IMPLEMENTATION_GUIDE.md

This file was deleted.

185 changes: 62 additions & 123 deletions README.md


147 changes: 32 additions & 115 deletions deployment/build-s3-dist.sh
@@ -29,6 +29,7 @@ bucket=$1
version=$2
region=$3
if [ -n "$4" ]; then profile=$4; fi
s3domain="s3.$region.amazonaws.com"

# Check if region is supported:
if [ "$region" != "us-east-1" ] &&
@@ -44,7 +45,7 @@ if [ "$region" != "us-east-1" ] &&
[ "$region" != "ap-southeast-2" ] &&
[ "$region" != "ap-northeast-1" ] &&
[ "$region" != "ap-northeast-2" ]; then
echo "ERROR. Rekognition operatorions are not supported in region $region"
echo "ERROR. Rekognition operations are not supported in region $region"
exit 1
fi

@@ -66,8 +67,6 @@ fi
template_dir="$PWD"
dist_dir="$template_dir/dist"
source_dir="$template_dir/../source"
workflows_dir="$template_dir/../source/workflows"
webapp_dir="$template_dir/../source/webapp"
echo "template_dir: ${template_dir}"

# Create and activate a temporary Python environment for this script.
@@ -160,10 +159,6 @@ if [ $? -eq 0 ]; then
echo "Lambda layer build script completed.";
else
echo "WARNING: Lambda layer build script failed. We'll use a pre-built Lambda layers instead.";
s3domain="s3-$region.amazonaws.com"
if [ "$region" = "us-east-1" ]; then
s3domain="s3.amazonaws.com"
fi
echo "Downloading https://rodeolabz-$region.$s3domain/media_insights_engine/media_insights_engine_lambda_layer_python3.6.zip"
wget -q https://rodeolabz-"$region"."$s3domain"/media_insights_engine/media_insights_engine_lambda_layer_python3.6.zip
echo "Downloading https://rodeolabz-$region.$s3domain/media_insights_engine/media_insights_engine_lambda_layer_python3.7.zip"
@@ -183,21 +178,11 @@ echo "CloudFormation Templates"
echo "------------------------------------------------------------------------------"

echo "Preparing template files:"
cp "$workflows_dir/instant_translate.yaml" "$dist_dir/instant_translate.template"
cp "$workflows_dir/transcribe.yaml" "$dist_dir/transcribe.template"
cp "$workflows_dir/rekognition.yaml" "$dist_dir/rekognition.template"
cp "$workflows_dir/comprehend.yaml" "$dist_dir/comprehend.template"
cp "$workflows_dir/MieCompleteWorkflow.yaml" "$dist_dir/MieCompleteWorkflow.template"
cp "$source_dir/operators/operator-library.yaml" "$dist_dir/media-insights-operator-library.template"
cp "$template_dir/media-insights-stack.yaml" "$dist_dir/media-insights-stack.template"
cp "$template_dir/string.yaml" "$dist_dir/string.template"
cp "$template_dir/media-insights-test-operations-stack.yaml" "$dist_dir/media-insights-test-operations-stack.template"
cp "$template_dir/media-insights-dataplane-streaming-stack.template" "$dist_dir/media-insights-dataplane-streaming-stack.template"
cp "$workflows_dir/rekognition.yaml" "$dist_dir/rekognition.template"
cp "$workflows_dir/MieCompleteWorkflow.yaml" "$dist_dir/MieCompleteWorkflow.template"
cp "$source_dir/consumers/elastic/media-insights-elasticsearch.yaml" "$dist_dir/media-insights-elasticsearch.template"
cp "$source_dir/consumers/elastic/media-insights-elasticsearch.yaml" "$dist_dir/media-insights-s3.template"
cp "$webapp_dir/media-insights-webapp.yaml" "$dist_dir/media-insights-webapp.template"
find "$dist_dir"
echo "Updating code source bucket in template files with '$bucket'"
echo "Updating solution version in template files with '$version'"
@@ -212,12 +197,6 @@ sed -i.orig -e "$new_bucket" "$dist_dir/media-insights-test-operations-stack.tem
sed -i.orig -e "$new_version" "$dist_dir/media-insights-test-operations-stack.template"
sed -i.orig -e "$new_bucket" "$dist_dir/media-insights-dataplane-streaming-stack.template"
sed -i.orig -e "$new_version" "$dist_dir/media-insights-dataplane-streaming-stack.template"
sed -i.orig -e "$new_bucket" "$dist_dir/media-insights-elasticsearch.template"
sed -i.orig -e "$new_version" "$dist_dir/media-insights-elasticsearch.template"
sed -i.orig -e "$new_bucket" "$dist_dir/media-insights-s3.template"
sed -i.orig -e "$new_version" "$dist_dir/media-insights-s3.template"
sed -i.orig -e "$new_bucket" "$dist_dir/media-insights-webapp.template"
sed -i.orig -e "$new_version" "$dist_dir/media-insights-webapp.template"

echo "------------------------------------------------------------------------------"
echo "Operators"
@@ -229,7 +208,7 @@ echo "------------------------------------------------------------------------------"

echo "Building 'operator failed' function"
cd "$source_dir/operators/operator_failed" || exit 1
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
zip -q dist/operator_failed.zip operator_failed.py
cp "./dist/operator_failed.zip" "$dist_dir/operator_failed.zip"
@@ -242,7 +221,7 @@ rm -rf ./dist
echo "Building Mediainfo function"
cd "$source_dir/operators/mediainfo" || exit 1
# Make lambda package
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
# Add the app code to the dist zip.
zip -q dist/mediainfo.zip mediainfo.py
@@ -256,7 +235,7 @@ rm -rf ./dist

echo "Building Media Convert function"
cd "$source_dir/operators/mediaconvert" || exit 1
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
zip -q dist/start_media_convert.zip start_media_convert.py
zip -q dist/get_media_convert.zip get_media_convert.py
@@ -271,7 +250,7 @@ rm -rf ./dist
echo "Building Thumbnail function"
cd "$source_dir/operators/thumbnail" || exit 1
# Make lambda package
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
if ! [ -d ./dist/start_thumbnail.zip ]; then
zip -q -r9 ./dist/start_thumbnail.zip .
@@ -296,7 +275,7 @@ rm -rf ./dist

echo "Building Transcribe functions"
cd "$source_dir/operators/transcribe" || exit 1
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
zip -q -g ./dist/start_transcribe.zip ./start_transcribe.py
zip -q -g ./dist/get_transcribe.zip ./get_transcribe.py
@@ -310,7 +289,7 @@ rm -rf ./dist

echo "Building Stage completion function"
cd "$source_dir/operators/captions" || exit
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
zip -g ./dist/get_captions.zip ./get_captions.py
cp "./dist/get_captions.zip" "$dist_dir/get_captions.zip"
@@ -322,9 +301,9 @@ rm -rf ./dist

echo "Building Translate function"
cd "$source_dir/operators/translate" || exit 1
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
[ -e package ] && rm -r package
[ -e package ] && rm -rf package
mkdir -p package
echo "create requirements for lambda"
# Make lambda package
@@ -361,7 +340,7 @@ rm -rf ./dist ./package

echo "Building Polly function"
cd "$source_dir/operators/polly" || exit 1
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
zip -q -g ./dist/start_polly.zip ./start_polly.py
zip -q -g ./dist/get_polly.zip ./get_polly.py
@@ -376,8 +355,8 @@ rm -rf ./dist
echo "Building Comprehend function"
cd "$source_dir/operators/comprehend" || exit 1

[ -e dist ] && rm -r dist
[ -e package ] && rm -r package
[ -e dist ] && rm -rf dist
[ -e package ] && rm -rf package
for dir in ./*;
do
echo "$dir"
@@ -460,7 +439,7 @@ zip -q -r9 check_text_detection_status.zip check_text_detection_status.py

# remove this when service is GA

[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
cd dist
cp ../start_technical_cue_detection.py .
@@ -474,7 +453,7 @@ zip -q -r9 ../start_technical_cue_detection.zip *
cd ../


[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
cd dist
cp ../check_technical_cue_status.py .
@@ -490,7 +469,7 @@ cd ../
mv -f ./*.zip "$dist_dir"


[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
cd dist
cp ../start_shot_detection.py .
@@ -503,7 +482,7 @@ cd ../../
zip -q -r9 ../start_shot_detection.zip *
cd ../

[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
cd dist
cp ../check_shot_detection_status.py .
@@ -525,7 +504,7 @@ mv -f ./*.zip "$dist_dir"

echo "Building test operators"
cd "$source_dir/operators/test" || exit
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
zip -q -g ./dist/test_operations.zip ./test.py
cp "./dist/test_operations.zip" "$dist_dir/test_operations.zip"
@@ -537,9 +516,9 @@ echo "------------------------------------------------------------------------------"

echo "Building DDB Stream function"
cd "$source_dir/dataplanestream" || exit 1
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
[ -e package ] && rm -r package
[ -e package ] && rm -rf package
mkdir -p package
echo "preparing packages from requirements.txt"
# Package dependencies listed in requirements.txt
@@ -565,50 +544,16 @@ zip -q -g dist/ddbstream.zip ./*.py
cp "./dist/ddbstream.zip" "$dist_dir/ddbstream.zip"
rm -rf ./dist ./package

echo "------------------------------------------------------------------------------"
echo "Elasticsearch consumer Function"
echo "------------------------------------------------------------------------------"

echo "Building Elasticsearch Consumer function"
cd "$source_dir/consumers/elastic" || exit 1

[ -e dist ] && rm -r dist
mkdir -p dist
[ -e package ] && rm -r package
mkdir -p package
echo "preparing packages from requirements.txt"
# Package dependencies listed in requirements.txt
pushd package || exit 1
# Handle distutils install errors with setup.cfg
touch ./setup.cfg
echo "[install]" > ./setup.cfg
echo "prefix= " >> ./setup.cfg
# Try and handle failure if pip version mismatch
if [ -x "$(command -v pip)" ]; then
pip install --quiet -r ../requirements.txt --target .
elif [ -x "$(command -v pip3)" ]; then
echo "pip not found, trying with pip3"
pip3 install --quiet -r ../requirements.txt --target .
elif ! [ -x "$(command -v pip)" ] && ! [ -x "$(command -v pip3)" ]; then
echo "No version of pip installed. This script requires pip. Cleaning up and exiting."
exit 1
fi
zip -q -r9 ../dist/esconsumer.zip .
popd || exit 1

zip -q -g dist/esconsumer.zip ./*.py
cp "./dist/esconsumer.zip" "$dist_dir/esconsumer.zip"
rm -f ./dist ./package

echo "------------------------------------------------------------------------------"
echo "Workflow Scheduler"
echo "------------------------------------------------------------------------------"

echo "Building Workflow scheduler"
cd "$source_dir/workflow" || exit 1
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
[ -e package ] && rm -r package
[ -e package ] && rm -rf package
mkdir -p package
echo "preparing packages from requirements.txt"
# Package dependencies listed in requirements.txt
@@ -633,15 +578,15 @@ zip -q -r9 ../dist/workflow.zip .
cd ..
zip -q -g dist/workflow.zip ./*.py
cp "./dist/workflow.zip" "$dist_dir/workflow.zip"
rm -f ./dist ./package/
rm -rf ./dist ./package/

echo "------------------------------------------------------------------------------"
echo "Workflow API Stack"
echo "------------------------------------------------------------------------------"

echo "Building Workflow Lambda function"
cd "$source_dir/workflowapi" || exit 1
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
if ! [ -x "$(command -v chalice)" ]; then
echo 'Chalice is not installed. It is required for this solution. Exiting.'
@@ -662,15 +607,15 @@ if [ $? -ne 0 ]; then
echo "ERROR: Failed to build workflow api template"
exit 1
fi
rm -f ./dist
rm -rf ./dist

echo "------------------------------------------------------------------------------"
echo "Dataplane API Stack"
echo "------------------------------------------------------------------------------"

echo "Building Dataplane Stack"
cd "$source_dir/dataplaneapi" || exit 1
[ -e dist ] && rm -r dist
[ -e dist ] && rm -rf dist
mkdir -p dist
if ! [ -x "$(command -v chalice)" ]; then
echo 'Chalice is not installed. It is required for this solution. Exiting.'
@@ -689,26 +634,8 @@ if [ $? -ne 0 ]; then
echo "ERROR: Failed to build dataplane api template"
exit 1
fi
rm -f ./dist

echo "------------------------------------------------------------------------------"
echo "Demo website stack"
echo "------------------------------------------------------------------------------"

echo "Building website helper function"
cd "$webapp_dir/helper" || exit 1
[ -e dist ] && rm -r dist
mkdir -p dist
zip -q -g ./dist/websitehelper.zip ./website_helper.py
cp "./dist/websitehelper.zip" "$dist_dir/websitehelper.zip"
rm -rf ./dist

echo "Building Vue.js website"
cd "$webapp_dir/" || exit 1
echo "Installing node dependencies"
npm install
echo "Compiling the vue app"
npm run build
echo "Built demo webapp"

echo "------------------------------------------------------------------------------"
echo "Copy dist to S3"
@@ -718,25 +645,19 @@ echo "Copying the prepared distribution to S3..."
for file in "$dist_dir"/*.zip
do
if [ -n "$profile" ]; then
aws s3 cp "$file" s3://"$bucket"/media-insights-solution/"$version"/code/ --profile "$profile"
aws s3 cp "$file" s3://"$bucket"/media_insights_engine/"$version"/code/ --profile "$profile"
else
aws s3 cp "$file" s3://"$bucket"/media-insights-solution/"$version"/code/
aws s3 cp "$file" s3://"$bucket"/media_insights_engine/"$version"/code/
fi
done
for file in "$dist_dir"/*.template
do
if [ -n "$profile" ]; then
aws s3 cp "$file" s3://"$bucket"/media-insights-solution/"$version"/cf/ --profile "$profile"
aws s3 cp "$file" s3://"$bucket"/media_insights_engine/"$version"/cf/ --profile "$profile"
else
aws s3 cp "$file" s3://"$bucket"/media-insights-solution/"$version"/cf/
aws s3 cp "$file" s3://"$bucket"/media_insights_engine/"$version"/cf/
fi
done
echo "Uploading the MIE web app..."
if [ -n "$profile" ]; then
aws s3 cp "$webapp_dir"/dist s3://"$bucket"/media-insights-solution/"$version"/code/website --recursive --profile "$profile"
else
aws s3 cp "$webapp_dir"/dist s3://"$bucket"/media-insights-solution/"$version"/code/website --recursive
fi

echo "------------------------------------------------------------------------------"
echo "S3 packaging complete"
@@ -752,11 +673,7 @@ echo "------------------------------------------------------------------------------"

echo ""
echo "Template to deploy:"
if [ "$region" == "us-east-1" ]; then
echo https://"$bucket".s3.amazonaws.com/media-insights-solution/"$version"/cf/media-insights-stack.template
else
echo https://"$bucket".s3."$region".amazonaws.com/media-insights-solution/"$version"/cf/media-insights-stack.template
fi
echo https://"$bucket"."$s3domain"/media_insights_engine/"$version"/cf/media-insights-stack.template

echo "------------------------------------------------------------------------------"
echo "Done"