Hmm, let r₁ be a star's distance from Sol and r₂ be the average distance between stars.

Then let A₁ = 4πr₁², A₂ = πr₂², d₁ = 2πr₁, d₂ = 2r₂

Then the proportion of stars to include at distance r₁ is p(r₁)=(d₁A₂)/(d₂A₁).
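For what it's worth, the ratio simplifies: p(r₁) = (2πr₁ · πr₂²)/(2r₂ · 4πr₁²) = πr₂/(4r₁), so inclusion falls off as 1/r₁. A quick sketch (the sample values of r₁ and r₂ here are made up; note the ratio exceeds 1 for stars closer than πr₂/4, so it needs a cap):

```python
import math

def inclusion_probability(r1, r2):
    """p(r1) = (d1 * A2) / (d2 * A1), clamped to 1 for nearby stars
    where the raw ratio would exceed 1."""
    a1 = 4 * math.pi * r1**2   # sphere surface area at distance r1
    a2 = math.pi * r2**2       # disc of radius r2 (one star's "slot")
    d1 = 2 * math.pi * r1      # circumference of the map circle at r1
    d2 = 2 * r2                # width of one slot along that circle
    return min(1.0, (d1 * a2) / (d2 * a1))

# The full ratio matches the simplified form pi*r2/(4*r1):
assert abs(inclusion_probability(40.0, 6.0) - math.pi * 6.0 / (4 * 40.0)) < 1e-12
```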

I probably should have used a curved portion of a sphere rather than a disc and an arc rather than a line, but I figured the two would cancel out somewhat. Otherwise I think I got the math right.

Use p(r₁) as the probability of a star being included, with an independent random roll for each star.
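The per-star roll could look something like this (the catalog format and the r₂ value are my own invented examples, not anything from real data):

```python
import math
import random

def inclusion_probability(r1, r2):
    # p(r1) = (d1 * A2) / (d2 * A1), capped at 1 for nearby stars
    return min(1.0, (2 * math.pi * r1 * math.pi * r2**2)
                    / (2 * r2 * 4 * math.pi * r1**2))

def thin_catalog(stars, r2, rng=random.random):
    """Keep each star with probability p(distance); one independent roll
    per star. `stars` is a list of (name, distance) pairs."""
    return [(name, dist) for name, dist in stars
            if rng() < inclusion_probability(dist, r2)]

# Hypothetical mini-catalog, distances in light-years:
catalog = [("Alpha Centauri", 4.4), ("Sirius", 8.6),
           ("Vega", 25.0), ("Deneb", 2600.0)]
kept = thin_catalog(catalog, r2=6.0)  # r2 = assumed average separation
```

Passing in `rng` makes the roll easy to pin down for testing or for a reproducible map.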

Of course this assumes the data is uniform. Our available star data actually gets sparser with distance as stars get harder to find; on the other hand, what we do catch presumably skews toward brighter and brighter stars. So it's probably best to stick with the function as is and add dim stars to fill in; the filler should get brighter and more numerous as distance increases.