HPC In The Cloud

Cloud Pricing

Both Amazon Web Services (EC2) and Windows Azure offer comprehensive HPC platforms available at hourly rates. Amazon offers a flexible line of compute instances (all prices below are for the Windows operating system):

Cluster Compute Instances
  • Quadruple Extra Large: $1.610 per hour
  • Eight Extra Large: $2.970 per hour
Cluster GPU Instances
  • Quadruple Extra Large: $2.600 per hour
High-I/O On-Demand Instances
  • Quadruple Extra Large: $3.580 per hour

As can be seen, high-I/O equipment carries a premium.

Windows Azure has an equivalent to the quadruple extra-large instance for approximately $0.96 per hour. Windows Azure does have other costs, such as storage and bandwidth, but a full price calculator is available at: http://www.windowsazure.com/en-us/pricing/calculator/.

Importantly, with the cloud, one instance for 100 hours costs the same as 100 instances for one hour, making the burst use of HPC both affordable and available.
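As a quick illustration of that point, the cost of a burst workload can be computed directly from the hourly rates above. This is a back-of-the-envelope sketch using the Cluster GPU rate, not a billing calculator:

```python
# With per-hour cloud pricing, total cost depends only on instance-hours,
# not on how those hours are arranged across machines.
rate = 2.60  # $/hour for a Cluster GPU Quadruple Extra Large instance (table above)

serial = 1 * 100 * rate   # one instance running for 100 hours
burst = 100 * 1 * rate    # 100 instances running for one hour each

print(serial)  # 260.0
print(burst)   # 260.0 -- same cost, results available 100x sooner
```

The burst shape delivers the same instance-hours for the same money but returns the answer in one hour instead of 100, which is the whole appeal for HPC workloads.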

Statistics from Moving Average Crossover with Cloud-Based GPU

A moving average is just the trailing average of price over a given number of price points (which can be days, hours, minutes, or even seconds). This chart of the US Dollar and Japanese Yen shows two crossovers from earlier this year that would indicate trade opportunities, occurring where the faster (i.e. shorter, red line, period 12) and slower (i.e. longer, green line, period 25) averages cross each other.

In such an algorithm there are two fundamental decisions to be made:

  • Which moving average indicator to use
  • What time period parameter to use for each moving average

For our trivial example we used four moving averages:

  • Simple Moving Average
  • Exponential Moving Average
  • Volume Weighted Moving Average
  • Typical Moving Average

Any single trading model of this type is made up of two of these averages, each with a time period (the number of prices in the moving average). The trade signals are simply the points at which the two averages cross each other.
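The crossover logic described above can be sketched in a few lines of Python. The 12/25 periods mirror the fast/slow averages in the chart, and the simple moving average stands in for any of the four indicators; the function names and the test prices are illustrative only, not the code used in the benchmark:

```python
def sma(prices, period):
    """Trailing simple moving average; None until enough points exist."""
    out = []
    for i in range(len(prices)):
        if i + 1 < period:
            out.append(None)
        else:
            window = prices[i + 1 - period : i + 1]
            out.append(sum(window) / period)
    return out

def crossovers(prices, fast=12, slow=25):
    """Indices where the fast average crosses the slow average (trade signals)."""
    f, s = sma(prices, fast), sma(prices, slow)
    signals = []
    for i in range(1, len(prices)):
        if f[i - 1] is None or s[i - 1] is None:
            continue  # not enough history yet for both averages
        prev = f[i - 1] - s[i - 1]
        curr = f[i] - s[i]
        if prev != 0 and prev * curr < 0:  # sign change: the averages crossed
            signals.append(i)
    return signals
```

With short periods and a V-shaped price series, for example, `crossovers([5, 4, 3, 2, 1, 2, 3, 4, 5], fast=2, slow=3)` flags the point where the fast average overtakes the slow one on the way back up. Swapping `sma` for an exponential, volume-weighted, or typical moving average gives the other three model families.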

Number of Models    CPU Time (s)    GPU Time (s)
1,000                 0.0807247       2.8194021
10,000                0.7487024       2.897539
100,000               7.3399314       6.0505344
1,000,000            73.46854        13.9881328
10,000,000          735.9741626     117.7903842

CPU time grew almost exactly linearly: ten times as many models took ten times as long to process. The same was not true for the GPU, which incurs overhead copying the data over to local memory and back out again; this appears to take around two seconds. For data sets below roughly 50,000 models the CPU approach was actually faster, but beyond that point the GPU's advantage became pronounced.
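This behaviour suggests a simple cost model: CPU time is purely linear in the number of models, while GPU time is a fixed transfer overhead plus a much smaller per-model cost. A back-of-the-envelope sketch, with rates estimated from the table above (the estimates are assumptions, not measurements):

```python
# Estimated from the timing table: seconds per model on each device,
# plus the roughly-two-second GPU data-transfer overhead noted above.
cpu_rate = 73.46854 / 1_000_000            # ~7.3e-5 s/model on the CPU
gpu_overhead = 2.8                         # approx. fixed transfer cost, seconds
gpu_rate = (13.9881328 - 2.8) / 1_000_000  # ~1.1e-5 s/model on the GPU

# The GPU wins once: gpu_overhead + gpu_rate * n < cpu_rate * n
break_even = gpu_overhead / (cpu_rate - gpu_rate)
print(round(break_even))  # roughly 45,000 models
```

The estimated break-even of roughly 45,000 models is consistent with the observed crossover somewhere below 50,000, and it shows why the GPU's advantage keeps widening: the fixed overhead is amortised while the per-model gap compounds.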
