Lately, I've been experimenting with AWS S3, EC2, and Lambda, and the experience has been great. That said, I don't like burning compute time by spinning up an S3 bucket or EC2 instance when I can write and test my code on a local machine first. As a bonus, working locally means I can also work offline while traveling.
SIMULATING AN S3 BUCKET ON YOUR LOCAL MACHINE
Fake S3 lets you run a server on your local machine that mimics the S3 API, buckets and all. As of right now, it's only available as a Ruby gem.
sudo gem install fakes3
Once you install the Ruby gem, the next step is to start a web server on port 4567. Here, -r sets the root directory where Fake S3 stores its data and -p sets the port. The --limit flag throttles the transfer rate (here to 50 KB/s) to simulate a slower connection, such as a mobile network.
sudo fakes3 -r /mnt/fakes3_root -p 4567 --limit=50K
That's it!
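Before pointing any code at it, you can sanity-check that the server is up with a plain HTTP request. Fake S3 speaks the S3 REST API, so a bare GET against the root should come back with an XML bucket listing (the exact response may vary between fakes3 versions):
curl http://localhost:4567/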
WRITING TO S3, READING FROM S3, AND A FEW OTHER DEVOPS-TYPE REQUESTS
After installing Fake S3, the next step was to test it out. The script below lets an administrator:
- Create an S3 bucket.
- Create 26 files, a through z, each containing the words "Hello World", inside the bucket.
- Iterate through each file in the bucket (26 times), create a JSON file for it, and publish it to the /uploads folder.
- Copy the entire /uploads folder to a new folder named /ec2.
- Delete the bucket.
Here's the full script:
require 'rubygems'
#This is the AWS SDK
require 'aws/s3'
#This is the tool that lets you do bash-like commands
require 'fileutils'
#This is simply for demonstration purposes
require 'time'
#This is simply for the demo
require 'json'
#This is a small class that wraps bash-style file commands in Ruby methods
class CopyUtil
  APP_ROOT = File.dirname(__FILE__)
  OUTPUT_DIR = "uploads"
  EC2_DIR = "ec2"

  #A. This runs first, when an instance of the class is created
  def initialize
    create_output_directory
  end

  #B. Create a clean output directory
  def create_output_directory
    #Remove any directories left over from a previous run
    FileUtils.rm_rf(EC2_DIR)
    FileUtils.rm_rf(OUTPUT_DIR)
    #Create the /uploads directory
    Dir.mkdir(OUTPUT_DIR)
    #Add it to the load path; File.join keeps the path platform independent
    $:.unshift(File.join(APP_ROOT, OUTPUT_DIR))
  end

  #Create a file and write its content, closing the handle when done
  def create_file(file_name, file_content)
    file_type = ".json"
    File.open(File.join(OUTPUT_DIR, "#{file_name}#{file_type}"), "w") do |output|
      output.puts file_content
    end
  end

  #C. Copy the entire /uploads directory into /ec2
  def copy_files
    FileUtils.cp_r(OUTPUT_DIR + "/.", EC2_DIR)
  end
end
#This class exists simply for the demo
class JSONUtil
  def create(key, value)
    { "#{key}" => "#{value}:#{get_timestamp}" }.to_json
  end

  def get_timestamp
    Time.now.utc.iso8601
  end
end
#Mix in the AWS::S3 namespace so Bucket and S3Object can be called directly
include AWS::S3
#Create an S3 connection. Fake S3 accepts any credentials; what matters is pointing the client at the local server and port
AWS::S3::Base.establish_connection!(:access_key_id     => "123",
                                    :secret_access_key => "abc",
                                    :server            => "localhost",
                                    :port              => "4567")
#Pick a name for the bucket
my_first_bucket = 'myFirstBucket'
#Create the bucket
Bucket.create(my_first_bucket)
#Go from a to z and store "Hello World" objects in the bucket
('a'..'z').each do |filename|
  S3Object.store(filename, 'Hello World', my_first_bucket)
end
#Create a new tool that will make files and copy them over
copy_util = CopyUtil.new
#Fetch the bucket so we can read back its contents
bucket = Bucket.find(my_first_bucket)
#Iterate through each item in the bucket and create a JSON file for each one
bucket.objects.each do |s3_obj|
  key = s3_obj.key
  value = s3_obj.value
  #Print the pair to the terminal
  puts key + ":" + value
  #Build the JSON payload
  json = JSONUtil.new.create(key, value)
  #Write the payload out as a file in /uploads
  copy_util.create_file(key, json)
end
#Copy over the entire directory
copy_util.copy_files
# Delete your bucket and all its keys
Bucket.delete(my_first_bucket, :force => true)
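To run the script yourself, save it to a file (s3_demo.rb is just a placeholder name) and execute it while the Fake S3 server from earlier is still listening on port 4567:
ruby s3_demo.rb
Each key:value pair is printed to the terminal, and every file in /uploads (and its copy in /ec2) should contain a timestamped JSON payload along these lines (the timestamp will reflect your run):
{"a":"Hello World:2016-01-15T10:30:00Z"}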
I wrote this as a single file so that it can later be ported to a single function within AWS Lambda.
The next step is to port this over to NodeJS (or Python) so that I can use it as an AWS Lambda function.
If I'm porting it over anyway, why did I write it in Ruby? Because I love Ruby!