Sunday, June 8, 2014

An FFT for Swift and the Xcode 6 Playground

Here's a preliminary way to view some FFT results using the new Apple Xcode6-beta Swift Playground (a recent Mac and an Apple Developer account required), perhaps useful for a bit of interactive "Matlab-like" DSP experimentation.  I can't yet figure out how to pass numeric vectors to Apple's really fast Accelerate/vDSP functions using the Swift Playground.  So, instead, I translated some ancient not-really-that-fast FFT code from Chipmunk Basic to Swift.

Enter the following Swift code in a new Playground. Wait a bit for the Playground to compute. Then click on the QuickLook bubble to the right of the m[i] variable near the end of this Swift script to see a plot of the FFT results. Enjoy.

import Foundation
// import Accelerate

var str = "Hello, playground"

var str2 = "My First Swift FFT"
println("\(str2)")

var len = 32 // radix-2 FFT length must be a power of 2

var myArray1 = [Double](count: len, repeatedValue: 0.0) // for Real Component
var myArray2 = [Double](count: len, repeatedValue: 0.0) // for Imaginary

var f0 = 3.0  // test input frequency

func myFill1 (inout a : [Double], inout b: [Double], n: Int, f0: Double) -> () {
 for i in 0 ..< n {
  // some test data: a pure cosine at frequency f0
  let x = cos(2.0 * M_PI * Double(i) * f0 / Double(n))
  a[i] = x    // Quicklook here to see a plot of the input waveform
  b[i] = 0.0  // imaginary part starts at zero
 }
 println("length = \(n)")
}

var sinTab = [Double]()

// Canonical in-place decimation-in-time radix-2 FFT
func myFFT (inout u : [Double], inout v : [Double], n: Int, dir : Int) -> () {
 
 var flag = dir // forward
 
 if sinTab.count != n {   // (re)build the twiddle factor lookup table
  sinTab = [Double](count: n, repeatedValue: 0.0)
  for i in 0 ..< n {
   sinTab[i] = sin(2.0 * M_PI * Double(i) / Double(n))
  }
  println("sine table length = \(n)")
 }
 
 var m : Int = Int(log2(Double(n)))
 for k in 0 ..< n {
  // rem *** generate a bit reversed address of k ***
  var ki = k
  var kr = 0
  for i in 1...m { // =1 to m
   kr = kr << 1  //  **  left shift result kr by 1 bit
   if ki % 2 == 1 { kr = kr + 1 }
   ki = ki >> 1   //  **  right shift temp ki by 1 bit
  }
  // rem *** swap data at k to bit reversed address kr
  if (kr > k) {
   var tr = u[kr] ; u[kr] = u[k] ; u[k] = tr
   var ti = v[kr] ; v[kr] = v[k] ; v[k] = ti
  }
 }
 
 var istep = 2
 while ( istep <= n ) { //  rem  *** layers 2,4,8,16, ... ,n ***
  var is2 = istep / 2
  var astep = n / istep
  for km in 0 ..< is2 { // rem  *** outer row loop ***
   var a  = km * astep  // rem  twiddle angle index
   // var wr = cos(2.0 * M_PI * Double(km) / Double(istep))
   // var wi = sin(2.0 * M_PI * Double(km) / Double(istep))
   var wr =  sinTab[a+(n/4)] // rem  get sin from table lookup
   var wi =  sinTab[a]       // rem  pos for fft , neg for ifft
   if (flag == -1) { wi = -wi }
   for var ki = 0; ki <= (n - istep) ; ki += istep { //  rem  *** inner column loop ***
    var i = km + ki
    var j = (is2) + i
    var tr = wr * u[j] - wi * v[j]  // rem ** butterfly complex multiply **
    var ti = wr * v[j] + wi * u[j]  // rem ** using a temp variable **
    var qr = u[i]
    var qi = v[i]
    u[j] = qr - tr
    v[j] = qi - ti
    u[i] = qr + tr
    v[i] = qi + ti
   } // next ki
  } // next km
  istep = istep * 2
 }
 var a = 1.0 / Double(n)
 for i in 0 ..< n {
  u[i] = u[i] * a
  v[i] = v[i] * a
 }
}
// compute magnitude vector
func myMag (u : [Double], v : [Double], n: Int) -> [Double] {
 var m = [Double](count: n, repeatedValue: 0.0)
 for i in 0 ..< n {
 m[i] = sqrt(u[i]*u[i]+v[i]*v[i])   // Quicklook here to see a plot of the results
 
 }
 return(m)
}

myFill1(&myArray1, &myArray2, len, f0)
myFFT(  &myArray1, &myArray2, len, 1)

var mm = myMag(myArray1, myArray2, len)
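
// Optional sanity check: for the cosine test input above, the scaled
// magnitude spectrum should peak at bin 3 (and its mirror at bin 29)
// with a value of about 0.5, and be near zero everywhere else.
var peakIndex = 0
for i in 1 ..< len {
 if mm[i] > mm[peakIndex] { peakIndex = i }
}
println("peak magnitude \(mm[peakIndex]) at bin \(peakIndex)")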


// Feel free to treat the above code as if it were under an MIT style Open Source license.
// rhn 2014-June-08

// modified 2014-Jul-10 for Xcode6-beta3 newer Swift syntax

Wednesday, February 12, 2014

Software Defined Radio and IQ sampling

Why is quadrature or IQ sampling used for most Software Defined Radio interfaces and software algorithms? It has to do with the sampling rate, and how the sampling clock (the local oscillator, or LO) relates to the signal frequency of interest. The Nyquist rate is twice the highest signal frequency. But in practice, given finite length signals, and thus signals that are not mathematically perfectly bandlimited, the sampling frequency for DSP has to be higher than twice the highest signal frequency. Thus doubling the number of samples by doubling the sample rate (2X LO) would still be too low. Quadrupling the sample rate (4X LO) would put you nicely above the Nyquist rate, but using that much higher frequency would be more expensive in terms of circuit components, DSP data rates, megaflops required, and so on.

So most IQ sampling is done with a local oscillator at (or relatively very near) the same frequency as the signal, which is obviously way too low a sampling frequency according to Nyquist. One sample per cycle of a sine wave could land at the zero crossings, or at the tops, or at any point in between. You will learn almost nothing about a sinusoidal signal so sampled. But let's call this, by itself useless, set of samples the I of an IQ sample set.

Now how about increasing the number of samples, not by simply doubling the sample rate, but by taking an additional sample a little bit after the first one each cycle? Two samples per cycle a little bit apart would allow one to estimate the slope or derivative. If one sample was at a zero crossing, the other one wouldn't be. So you would be far better off in figuring out the signal being sampled. Two points, plus knowledge that the signal is roughly periodic at the sample rate, is usually enough to accurately estimate the unknowns of a canonical sine wave equation (amplitude and phase).

But if you go too far apart with the second sample, to halfway between the first set of samples, you end up with the same problem as 2X sampling: one sample could be at a positive zero crossing and the other at a negative one, telling you nothing. It's the same problem as 2X being too low a sample rate. Somewhere between two samples of the first set (the "I" set) there's a sweet spot: not redundant, as sampling at the same instant would be, and not evenly spaced (which is equivalent to doubling the sample rate). There's an offset which gives you maximum information about the signal, with the cost being an accurate delay for the second sample instead of a much higher sample rate. It turns out that that delay is 90 degrees, a quarter of a cycle. That gives you a very useful "Q" set of samples, which, together with the "I" set, tells you far more about a signal than either alone. Perhaps enough to demodulate AM, FM, SSB, QAM, etc., etc. (Also posted on http://electronics.stackexchange.com/ on 2014-Feb-12.)
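
To make that idea concrete, here's a minimal Swift sketch (an illustration with made-up numbers, not SDR code): it samples a sinusoid once per carrier cycle for I, samples again a quarter cycle later for Q, and then recovers the amplitude and phase from each I/Q pair.

    import Foundation

    // Illustration only: a pure carrier with some arbitrary amplitude and phase.
    let carrierHz = 1000.0
    let amplitude = 0.7
    let phase     = 0.4   // radians

    func sample(at t: Double) -> Double {
        return amplitude * cos(2.0 * Double.pi * carrierHz * t + phase)
    }

    let period = 1.0 / carrierHz
    for n in 0 ..< 4 {
        let t = Double(n) * period            // one I sample per carrier cycle
        let i = sample(at: t)                 // I: in-phase sample
        let q = sample(at: t + period / 4.0)  // Q: same cycle, 90 degrees later
        // A lone once-per-cycle sample stream tells you almost nothing,
        // but each I/Q pair recovers the amplitude and phase.
        let estAmplitude = sqrt(i * i + q * q)
        let estPhase     = atan2(-q, i)       // sign depends on the convention used
        print("amplitude ≈ \(estAmplitude), phase ≈ \(estPhase)")
    }

Every pair should print an amplitude of about 0.7 and a phase of about 0.4, even though the "sample rate" here is only 1X the carrier frequency.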

Wednesday, May 23, 2012

Adjusting a Tuning Fork

I wanted something to help demonstrate the accuracy of my HotPaw music tuner apps for iPhone, iPad, and even Macs.  So I ordered a Concert A 440 tuning fork from Amazon.  It arrived in the mail a couple days later.  I quickly tried it out.  Big Problem!  The fork measured 9 cents sharp or 442 Hz.  Thinking something might be wrong with my tuners, I checked the tuning fork against 5 competing tuner apps.  But they all reported the pitch to be around the same amount sharp.  Was my iPhone broken?  To rule that out, I again checked the tuning fork with strobe tuner apps on a different iPhone and also an iPad.  Same result.

This tuning fork was out-of-tune.

But I found a quick-and-dirty fix.

As an online education forum suggested, an interesting school science experiment might be to see if adding weights and changing their position on a tuning fork will change the frequency of the fork. So I played science student and tried the experiment.

Two big rubber bands on the tines clearly lowered the frequency... by way too much. Next I tried a couple of strips of masking tape wrapped around the very end of each tuning fork tine. That also lowered the frequency, by only a bit too much, to 20 cents flat. So I removed those pieces of tape and carefully measured them. Then I cut some tape strips about 9/29ths as long (actually a few mm longer). I wrapped the new lengths of tape on the fork tines, and the fork rang 3 cents flat. A little more fine trimming of the length of the tape with some sharp scissors, and I got the fork to measure within ±1 cent of 440 Hz, as measured against several calibrated iPhone tuners (both dial and strobe). The fork also now sounds good when played simultaneously with an mp3 of a 440 Hz tone to check for beats.
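
For the record, here's the arithmetic behind that 9/29ths figure as a tiny Swift sketch (my own reconstruction, assuming the tape's pitch-lowering effect is roughly proportional to its length):

    import Foundation

    // Cents offset of a measured frequency from a 440 Hz reference:
    // cents = 1200 * log2(f / 440).
    func cents(_ hz: Double) -> Double {
        return 1200.0 * log2(hz / 440.0)
    }

    // The bare fork measured about +9 cents; a full-length tape strip brought it
    // to about -20 cents, i.e. the full strip lowered the pitch by roughly 29 cents.
    // Assuming the effect is roughly proportional to the tape length:
    let dropNeeded     = 9.0                  // cents to remove from the bare fork
    let dropFullStrip  = 29.0                 // cents removed by the full-length strip
    let lengthFraction = dropNeeded / dropFullStrip
    print(lengthFraction)                     // ≈ 0.31, so cut about 9/29 of the length
    print(cents(442.0))                       // how far a given reading is from A440, in cents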

The tone probably isn't as pure, and tape will eventually age and wear, but this quick fix should work well enough until my next trip to a good music store for a better quality tuning fork.

Saturday, April 14, 2012

Drawing in iOS apps

New iOS programmers often ask how to draw a dot, line, or rect on the display, hoping for some simple method, such as drawDot(x,y), which works simply and immediately.

No such function exists in the iOS frameworks, and there are two big concepts needed to understand the reason why, and what to do instead.

The first concept is asynchronous event-driven programming.  In this software paradigm, an app doesn't tell the OS what to do (such as draw something now).  Instead, your app declares a method and requests that the OS call that method back at some later time.  The OS then calls that method back when it is good and ready, which is very often later, not now.

So, on iOS, your app doesn't just draw when it wants to draw.  Your app instead politely requests that the OS call a particular method, such as drawRect, by calling setNeedsDisplay on the desired view or subview.  The OS usually complies with this request later, but only after you have exited the code that your app is currently running.  And the OS won't comply with your request more often than the frame rate (usually 60 Hz).  Only at those times when the OS chooses to call your drawRect does a view's context actually exist for any drawing.
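
As a minimal sketch of that request-and-callback pattern (shown here in Swift for brevity, although the rest of this post uses Objective-C; the class and property names are just placeholders):

    import UIKit

    // A view that only ever draws inside the OS-initiated callback.
    class DotView: UIView {
        var dot = CGPoint(x: 10, y: 10) {
            didSet { setNeedsDisplay() }   // ask the OS to redraw, some time later
        }

        override func draw(_ rect: CGRect) {
            // Only here does a valid drawing context exist for this view.
            guard let ctx = UIGraphicsGetCurrentContext() else { return }
            ctx.setFillColor(UIColor.red.cgColor)
            ctx.fill(CGRect(x: dot.x, y: dot.y, width: 2.0, height: 2.0))
        }
    }

Setting the dot property doesn't draw anything by itself; it only queues a request, and the actual drawing happens whenever the OS decides to call the draw method.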

The second concept is the need to be able to redraw everything in a view, even if one only wants to update one dot on top of something drawn a few seconds ago.  You can't just draw a dot or line and expect it to show up on top of much earlier content, stuff that was drawn beforehand in another callback.  Even though you can see it on the display, nothing drawn one frame beforehand was actually saved for reuse as far as an iOS app is concerned.

In the old days of computing, when computers were the size of refrigerators or larger, there were no glass display screens.  All human readable output was via teletypes or line-printers, very often in another room from the computer. Your computer output was printed on paper, and then fed out the top.  If you needed to add a word, you didn't try to get the computer to somehow modify an existing printed sheet of paper with white-out or something, you just reprinted the entire page.  Or maybe you could stick a post-it note on top of the earlier printed page.  Whereas on early personal computers, a program might be able to change bits on a bitmapped memory display at any time.  That doesn't work under iOS.

iOS view drawing is more similar to the much older page printing model.  The display's bitmap isn't in the same memory as the app.  Your app sends a UIView's graphics contents off to the GPU somewhere else on the chip.  If you want to change anything, you need to send the entire view to the graphics logic again.  There's no way to directly add a pixel or line to an existing view that has already been displayed.  This is because an iOS device's display is connected via a really complicated GPU on another part of the chip, and this connection is pretty much opaque, i.e. write-only, just as if it were in another room at the end of an output-only printer cable.

So how do you draw in the usual procedural-code way (e.g. add some dots and lines now, and some more dots and lines to that same view later)?  By having your app allocate its own bitmap memory and drawing context, and drawing into that context using Core Graphics.  You can then send that entire bitmap to update a view as needed.

First, we need some instance variables in your view's interface declaration. (These could be global variables for a really tiny app where the code is not meant for reuse or code review.)

    unsigned char   *myBitmap;
    CGContextRef    myDrawingContext;
    CGRect          myBitmapRect;

Note that if you need more than one drawing context, each drawing context will require its own bitmap.

Here's how to create your own bitmap drawing context:

- (void)makeMyBitMapContextWithRect:(CGRect)r {
    
    int     h = (int)r.size.height;
    int     w = (int)r.size.width;
    int     bitsPerComponent = 8;            // 8 bits per color component
    int     rowBytes = 4 * w;                // for 32-bit ARGB format
    int     myBitmapSize = rowBytes * h;     // memory needed

    myBitmap  = (unsigned char *)malloc(myBitmapSize);

    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    if (myBitmap != NULL) {
        // clear bitmap to white
        memset(myBitmap, 0xff, myBitmapSize);

        myDrawingContext = CGBitmapContextCreate(myBitmap,
                              w, h, bitsPerComponent, rowBytes,
                              colorspace,
                              kCGImageAlphaPremultipliedFirst );
        myBitmapRect = r;
    }
    CGColorSpaceRelease(colorspace);
    // remember to CGContextRelease(myDrawingContext) and free(myBitmap)
    // when this context is no longer needed
}

Here's how to draw a dot in your bitmap context:

- (void)drawRedDotInMyContextAtPoint:(CGPoint)pt {
    float x = pt.x;
    float y = pt.y;
    float dotSize = 1.0;
    
    CGRect r1 = CGRectMake(x,y, dotSize,dotSize);
    CGContextSetRGBFillColor(myDrawingContext, 1.0, 0, 0, 1.0);
    // draw a red dot in this context
    CGContextFillRect(myDrawingContext, r1);    
}

But whatever drawing you do in your bitmap context will initially be invisible.  You still have to get the bitmap onto the iOS device's display.  There are two ways to do this. Here's how to draw your bitmap context into a view during the view's drawRect:

// call this from drawRect with the drawRect's current context
- (void)drawMyBitmap:(CGContextRef)context {     
    CGImageRef myImage = CGBitmapContextCreateImage(myDrawingContext);
    CGContextDrawImage(context, myBitmapRect, myImage  );
    CGImageRelease(myImage);
}

Call setNeedsDisplay on your view (which needs to be an instance of your subclass of UIView), and the OS will call the view's drawRect, which can then call the above update method.

If your bitmap drawing context is the exact same size as your view, here's how to update an entire UIView outside of a drawRect.

- (void)useMyBitmapContextAsViewLayer {
    CGImageRef myImage = CGBitmapContextCreateImage(myDrawingContext);
    // use the following only inside a UIView's implementation
    UIView *myView = self;
    CALayer *myLayer = [ myView layer ];
    [ myLayer setContents : (id)myImage ];   // use a (__bridge id) cast under ARC
    CGImageRelease(myImage);
//  notify the OS that a drawing layer has changed
//  [ CATransaction flush ];
}

You only have to do a CATransaction flush if you are not calling setNeedsDisplay on your view, for instance, if you are doing drawing in a background thread.

Note that a view update isn't needed after every single item drawn.  Since the actual hardware display never changes more often than the frame rate, which is no more than 60 Hz, a view update is needed no more than once every frame time.  So don't call setNeedsDisplay, or the CALayer update method, more than once every 1/60th of a second. Perhaps use a timer method in an animation loop at a known frame rate to call this, and only if the bitmap is dirty.
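
One way to pace those updates is a CADisplayLink, which calls back once per display frame.  Here's a rough Swift sketch (the names are placeholders, and the actual bitmap push is left out):

    import UIKit

    class BitmapUpdater : NSObject {
        var bitmapIsDirty = false
        private var link: CADisplayLink?

        func start() {
            // fires once per display refresh, i.e. at most the hardware frame rate
            link = CADisplayLink(target: self, selector: #selector(tick))
            link?.add(to: .main, forMode: .common)
        }

        func stop() {
            link?.invalidate()
            link = nil
        }

        @objc private func tick() {
            guard bitmapIsDirty else { return }   // skip frames with nothing new
            bitmapIsDirty = false
            // ... push the bitmap into the view or its layer here ...
        }
    }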

Added (2012May22):

So why isn't this the regular paradigm to do drawing on iOS devices?  What's the disadvantage?

There are at least three big disadvantages to this method.  One is that the bitmap can be huge and use up a lot of memory, which is in limited supply to apps on iOS devices.  Second is that you may need to adapt your bitmap size to the resolution and scale of the device; if you use an iPhone-sized bitmap on an iPad, or a regular-scale bitmap on a Retina device, the resulting image may look blurry or pixelated instead of sharp.  Lastly, this method can be slow, resulting in a low frame rate or poor app performance, since the entire bitmap has to be reformatted and copied to the GPU every time your app updates the display, which can take far longer than sending a few line or point coordinates if your drawing isn't too complicated.  Apple has optimized their recommended graphics flow (drawing in a view's drawRect) for lower memory usage, better device independence, and good UI responsiveness, all of which benefit iOS device users, even if that makes things more difficult for an iOS app developer.

Tuesday, April 3, 2012

Musical Pitch is not just FFT frequency


In various software forums, including stackoverflow, I've seen many posts by software developers who seem to be trying to determine musical pitch (for instance, when coding up yet another guitar tuner) by using an FFT. They expect to find some clear dominant frequency in the FFT result to indicate the musical pitch. But musical pitch is a psycho-perceptual phenomenon.  These naive attempts at using an FFT often fail, especially when used for the sounds produced by large string instruments and bass or alto voices.  And so, these software developers ask what they are doing wrong.

It is surprising that, in these modern times, people do not realize just how much of what they experience in life is mostly an illusion. There is plenty of recent research on this topic. One of my favorite books on this general subject is "MindReal" by Robert Ornstein. Daniel Kahneman won the 2002 Nobel Prize for ground-breaking research in a related area.

The illusion that our decisions are logical allows us to be susceptible to advertising and con men.  The illusion that we see what is really out there allows magicians and pick-pockets to perform tricks on us. The illusion that what we hear is actually the sound that a musical instrument transmits to our ear is what seems to be behind the misguided attempts to determine the pitch of a musical instrument, or a human voice, by using just a bare FFT.

One reason that frequency is not pitch is that many interesting sounds contain a lot of harmonics or overtones. This is what makes these sounds interesting; otherwise they would sound pretty boring, like a pure sine-wave generator. The higher overtones or harmonics, after being amplified or filtered by the resonance of the body of a musical instrument or the head of a singer, can often end up stronger than the original fundamental frequency.  Then the ear/brain combination of the listener, finding mostly these higher harmonics in a sound, guesses and gives us the illusion that we are hearing a lower pitch.  In fact, this lower pitch frequency can be completely missing from the audio frequency spectrum of a sound, or nearly so, and still be clearly heard as the pitch.

So pitch is different from frequency, and musical pitch detection and estimation is different from just frequency estimation.  So pitch estimators look for periodicity, not spectral frequency.

But how can you have a periodic sound that does not contain the pitch frequency in its spectrum? 

It's easy to experiment and hear this yourself.  Using a sound editor, you can create a test waveform which shows this effect. Create a high-frequency tone, say a 1568 Hz pure sine wave (a G6). Chop this tone into a short segment, slightly shorter than a 100th of a second. Repeat this high-frequency tone segment 100 times per second.  Play it.  What do you hear? It turns out you don't hear the high-frequency tone. An FFT will show most of the waveform magnitude in a frequency bin near 1568 Hz, since that's what makes up the vast majority of the waveform.  But even though the sound you created consists only of high-frequency sine wave bursts, you'll actually hear the lower frequency of the repeat rate, the periodicity.  A human will hear 100 Hz.
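
If you'd rather generate that test waveform in code than in a sound editor, here's a rough Swift sketch of the same experiment (the sample rate and burst length are just reasonable guesses, not anything from the original description):

    import Foundation

    // Bursts of a 1568 Hz sine (G6), restarted 100 times per second,
    // at a 44100 Hz sample rate.
    let sampleRate    = 44100.0
    let toneHz        = 1568.0
    let repeatHz      = 100.0          // burst repetition rate
    let burstFraction = 0.9            // each burst slightly shorter than 1/100 s

    let totalSamples  = Int(sampleRate)                      // one second of audio
    var samples       = [Double](repeating: 0.0, count: totalSamples)
    let periodSamples = Int(sampleRate / repeatHz)           // 441 samples per burst period
    let burstSamples  = Int(Double(periodSamples) * burstFraction)

    for i in 0 ..< totalSamples {
        let posInPeriod = i % periodSamples
        if posInPeriod < burstSamples {
            // the sine's phase restarts at the beginning of every burst
            samples[i] = sin(2.0 * Double.pi * toneHz * Double(posInPeriod) / sampleRate)
        }
    }
    // Write `samples` out as audio and play it: the perceived pitch is the
    // 100 Hz repetition rate, even though nearly all of the spectral energy
    // sits near 1568 Hz.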

So to determine what pitch a human will hear, one needs a periodicity or pitch detector or estimator, not a frequency estimator.

There are many pitch detection or pitch estimation methods from which to choose, with varying strengths and weaknesses, as well as differing computational complexity.  Some of these methods include autocorrelation and other lag estimators such as AMDF or ASDF.  A lag estimator looks for which later segment of the waveform is closest to being a copy of the current segment.  Lag estimators are often weighted, since periodic waveforms offer multiple candidate repetition periods from which to choose.  Other pitch estimation methods include, in no particular order, Cepstrum or cepstral methods, harmonic product spectrum analysis, linear predictive coding analysis, and composite methods such as YAAPT or RAPT, which may even involve some statistical decision analysis.
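
For a flavor of the simplest of these, here's a bare-bones lag-based (autocorrelation-style) estimator in Swift.  It's only a sketch: a real pitch tracker adds weighting, interpolation, and voicing decisions on top of something like this.

    import Foundation

    // Pick the lag (period in samples) whose shifted copy best matches the
    // original, then convert that period to a frequency.
    func estimatePitchHz(samples: [Double], sampleRate: Double,
                         minHz: Double = 60.0, maxHz: Double = 1000.0) -> Double {
        let minLag = Int(sampleRate / maxHz)
        let maxLag = min(Int(sampleRate / minHz), samples.count - 1)
        guard minLag > 0, maxLag > minLag else { return 0.0 }
        var bestLag = minLag
        var bestSum = -Double.infinity
        for lag in minLag ... maxLag {
            var sum = 0.0
            for i in 0 ..< (samples.count - lag) {
                sum += samples[i] * samples[i + lag]   // similarity at this lag
            }
            if sum > bestSum {
                bestSum = sum
                bestLag = lag
            }
        }
        return sampleRate / Double(bestLag)            // period -> frequency
    }

Fed the burst waveform from the experiment above, an estimator like this should report a pitch near 100 Hz, which matches what a listener hears, while a peak-picking FFT reports a frequency near 1568 Hz.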

It's not as simple as feeding samples to an FFT and expecting a useful result.

Wednesday, October 19, 2011

Memories of my working with Steve Jobs


Vignettes of my working with Steve Jobs, circa 1980 to 1982

Number Crunching Apple II hardware

I started at Apple as an Apple II peripheral designer.  One of my first projects was to design the Super Serial Card for the Apple IIe, which was on the official Apple price list for around a decade, making it one of the longest Apple products ever in continuous production.  During this time, all of Apple's engineering (for Apple II, III, Lisa, R&D, etc.) fit into one building.  For the R&D portion of the job, I designed and built several prototype Apple II peripherals.  One of the weirdest was a way to connect an Intel 8087 math coprocessor to share memory with an Apple II.

I thought it was great to show off that an Apple II could do floating-point number crunching with the performance of a minicomputer.  Steve walked over to my engineering lab bench and asked what I was doing.  I told him.  He then challenged: "Why would anyone want or need a personal computer to do that kind of fast numerical computation?"  I paused, a bit lost for words.  He gave me one of those looks that said I was just wasting Apple's time, and stalked off.  Even if I was designing something destined to be a standard part of the future of personal computing, Steve wanted me to be able to clearly show why.

Joining the Mac Team

Burrell Smith was the original Mac hardware designer, but he had first joined the Apple II lab as a hardware technician, with an engineering lab bench very near mine.  During this time, we helped each other out by doing design reviews of each other's hardware schematics.  Probably because I had worked with Burrell, I later became the 6th or 7th hardware engineer to join the Mac team.  Burrell was the lead digital logic designer, Dan Kottke was wire wrapping prototypes, Ed Riddle had temporarily joined the team to do the keyboard controller, and Hap Horn and George Crow were doing analog design.  (And Brian Howard was officially supposed to be doing documentation, but might have actually spent more time helping Dan Kottke with building and testing the prototype Mac hardware.)

I was originally brought onto the Mac team to help Ed Riddle with the keyboard controller, but as soon as I moved over to Texaco Towers (Apple's skunkworks location across the road from the main campus), I was assigned to help Wendell Sanders with a possible re-design of Woz's disk controller, later named the "Integrated Woz Machine" or IWM.  The IWM chip, when finished, went into both the Mac and the Apple IIe.

Steve often came by Texaco Towers.  I don't recall him saying too much about the hardware. I do remember his attention to detail regarding the case and keyboard design and the precise sizing of its chamfers.

I also recall him talking about a bet he made with the head of the Lisa project, circa early 1982, with him betting that the Mac would sell something like 10,000 units before the Lisa did.

Buying dinner at FJLs (Frankie, Johnnie, and Luigi too) Pizza

One night Steve came by the lab, saw that most people were working late, and decided to take us all out for dinner.  We drove over to a favorite late-night pizza restaurant in Mountain View, "Frankie, Johnnie, and Luigi too", better known as just FJLs.  At the end of the meal, Steve threw down his credit card to pay.  The server came back a few minutes later and told Steve his card had expired.  To save Steve the embarrassment, I quickly gave my credit card to the waiter.  So I ended up buying Steve dinner.  I don't remember whether I ever tried to file an expense report for that bill.

Apple "leaked from the top".

The team occasionally had friends and other Apple coworkers visit us over at the Mac skunkworks site.   I had my then girlfriend bring her Apple II into the Mac lab so that I could fix some problem with it.  Soon thereafter, there was a management edict that the project was too secret to allow visitors.  But within days of this edict, we found Steve showing off a Mac prototype to Joan Baez and her son.  Later rumors were that they were dating.  Steve even brought Joan to the office Holiday party later that year (where the San Francisco symphony was our musical entertainment for the night).

Color connector on an early Mac prototype

After the Mac team had grown too big for the Texaco Towers skunkworks site, it moved back to a building in the main Bandley complex.  On a lab bench there, next to mine, Dan Kottke was busy assembling and testing more Mac prototypes.  On one Mac prototype Dan was adding a video test connector.  Steve walked in one day and asked what Dan was doing.  Dan told Steve that this was a test connector that could also be used to experiment with possible color video output.  I overheard Steve say something like "Stop doing that.  The Mac will not have a color option."

Atari-Apple-Amiga historical connection

Prior to Apple, Steve had worked at Atari.  Also working at Atari at the same time as Steve was Jay Miner.  According to Jay, Nolan Bushnell, who was strapped for cash flow at the time, had offered Jay 5 cents a chip, instead of a salary, to design the ASIC for what later became the Atari 2600 game console.  Jay didn't think it would sell that well, so he joined Atari taking a salary instead.  The 2600 game console ended up selling millions of units.  After the 2600, Jay also designed the chips for the Atari 400 and 800 personal computers.  Jay later left Atari to help form Zymos, a custom semiconductor vendor.  But Jay still wanted to design his own computers.

Early in the Mac project, when Burrell and the team were deciding that the Mac needed more than 64k of RAM, they decided that the Mac would need a CMOS clock chip that would keep time even when the Mac was unplugged.  At that time, none of the existing solutions were suitable.  So Steve introduced me to Jay, saying Jay was someone he knew from his Atari days, and one of the best chip designers around.  He wanted us to explore whether Zymos could do the custom design and manufacturing for a CMOS clock chip, and later the integrated disk controller chip as well.

Jay later managed to talk the investors in Zymos into letting him leave to start a new computer game company, and Jay talked me into leaving Apple to be the system architect and a co-founder of the Amiga game console company.  The Amiga product would be a game console with color output, and therefore very different from the Mac (with which I was extremely doubtful that any startup could compete).  Later, Jay decided that the game console should really be a personal computer and have an optional monochrome display.

So that's one of the ways in which Atari, Apple and Amiga history is intertwined.

My signature in the Original Mac 128K case

As part of the original Mac design team, I was at the signing party for signatures that were to become part of the texture moldings inside the back of the prototypes for the Mac case.

But I left Apple slightly over a year before the Mac was officially introduced, so I wasn't sure that my signature would be left in, even if Apple let the signatures actually go into the finished product.  However, I attended the official Mac product announcement at the 1984 Apple shareholders' meeting, and Steve surprisingly came up to me, personally thanked me, and told me that my name was inside the Mac.  I think that was the last time I talked with Steve.

Sunday, July 24, 2011

Estimating iPhone App Store Sales From Rankings - update


Here's what the average daily sales graph might look like for the top 1% to 10% in popularity of all paid iOS apps (out of about 270,000 paid apps in the App Store).

See my April Musingpaw blog post for details on the power law equation used to estimate these sales.  Note that any top 100 app is in the top 0.03% of all apps, which is thus way off the top left of the scale.
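
The general shape of such a model is just a power law of the sales rank.  As a sketch only (the coefficients below are made-up placeholders, not the fitted values from that post):

    import Foundation

    // Hypothetical power-law model: estimated daily sales fall off as a
    // power of the sales rank.  The scale and exponent here are placeholders.
    func estimatedDailySales(rank: Double, scale: Double, exponent: Double) -> Double {
        return scale * pow(rank, -exponent)
    }

    for rank in [100.0, 1_000.0, 10_000.0, 27_000.0] {
        let sales = estimatedDailySales(rank: rank, scale: 50_000.0, exponent: 0.9)
        print("rank \(Int(rank)): roughly \(Int(sales)) sales per day (illustrative numbers only)")
    }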