A place to discuss the science of computers and programs, from algorithms to computability.

Formal proofs preferred.

Moderators: phlip, Moderators General, Prelates

xulaus
Posts: 136
Joined: Thu Jul 03, 2008 11:09 am UTC

This quote from the "What would you do with an infinitely fast computer" thread really got me thinking.
LuminaryJanitor wrote:Something a CS professor pointed out a while ago, to illustrate the size of combinatorial search spaces:

If you enumerated all possible 24-bit 1600x1200 bitmap images, you'd have photos of all your favourite celebrities engaging in all manner of obscene acts.

I'm not looking for porn; god knows that's easy enough to get on the internet. But could people help me narrow the search space here? While there may be huge numbers of interesting pictures out there, they are scattered among vast amounts of boring ones.

My current technique is to use a 6-bit colour space (2 bits for each of R, G and B) and 5x5 images (we can always join them together later); the problem is that this is still about 10^45 images. Does anyone have any ideas for how to discard "boring" pictures, and how many that would exclude? Everything I can think of has little impact.
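The size of that 6-bit 5x5 space is easy to check directly, e.g. in Ruby:

```ruby
# Sanity check on the search-space size: 5x5 pixels at 6 bits each.
bits_per_pixel = 6
pixels = 5 * 5
count = 2**(bits_per_pixel * pixels)  # 2^150 distinct images

puts count.to_s.length  # 46 digits, i.e. about 10^45
```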
Meaux_Pas wrote:I don't even know who the fuck this guy is

Earlz
Gets Obvious Implications
Posts: 785
Joined: Sat Jun 09, 2007 8:38 am UTC
Location: USA
Contact:

Yes, after reading that thread I actually wrote a very small program *in assembly* to generate every possible image for a 320x200, 256-color screen... yeah, try going into your favorite calculator and cutting that down to just monochrome and 100x100.
Spoiler:

Code: Select all

`irb(main):001:0> 2**(100*100)=> 19950631168807583848837421626835850838234968318861924548520089498529438830221946631919961684036194597899331129423209124271556491349413781117593785932096323957855730046793794526765246551266059895520550086918193311542508608460618104685509074866089624888090489894838009253941633257850621568309473902556912388065225096643874441046759871626985453222868538161694315775629640762836880760732228535091641476183956381458969463899410840960536267821064621427333394036525565649530603142680234969400335934316651459297773279665775606172582031407994198179607378245683762280037302885487251900834464581454650557929601414833921615734588139257095379769119277800826957735674444123062018757836325502728323789270710373802866393031428133241401624195671690574061419654342324638801248856147305207431992259611796250130992860241708340807605932320161268492288496255841312844061536738951487114256315111089745514203313820202931640957596464756010405845841566072044962867016515061920631004186422275908670900574606417856951911456055068251250406007519842261898059237118054444788072906395242548339221982707404473162376760846613033778706039803413197133493654622700563169937455508241780972810983291314403571877524768509857276937926433221599399876886660808368837838027643282775172273657572744784112294389733810861607423253291974813120197604178281965697475898164531258434135959862784130128185406283476649088690521047580882615823961985770122407044330583075869039319604603404973156583208672105913300903752823415539745394397715257455290510212310947321610753474825740775273986348298498340756937955646638621874569499279016572103701364433135817214311791398222983845847334440270964182851005072927748364550578634501100852987812389473928699540834346158807043959118985815145779177143619698728131459483783202081474982171858011389071228250905826817436220577475921417653715687725614904582904992461028630081535583308130101987675856234343538955409175623400844887526162643568648833519463720377293240094456246923254350400678
027273837755376406726898636241037491410966718557050759098100246789880178271925953381282421954028302759408448955014676668389697996886241636313376393903373455801407636741877711055384225739499110186468219696581651485130494222369947714763069155468217682876200362777257723781365331611196811280792669481887201298643660768551639860534602297871557517947385246369446923087894265948217008051120322365496288169035739121368338393591756418733850510970271613915439590991598154654417336311656936031122249937969999226781732358023111862644575299135758175008199839236284615249881088960232244362173771618086357015468484058622329792853875623486556440536962622018963571028812361567512543338303270029097668650568557157505516727518899194129711337690149916181315171544007728650573189557450920330185304847113818315407324053319038462084036421763703911550639789000742853672196280903477974533320468368795868580237952218629120080742819551317948157624448298518461509704888027274721574688131594750409732115080498190455803416826949787141316063210686391511681774304792596709376`

So yeah, you have a huge search space... The very sad thing is that there is no way to cheaply filter this space, because the "filtering" function would incur more overhead than just computing the extra image. Well, wait, in a lot of cases...

OK, so let's take this back up to the 256-color spectrum. My idea for limiting the search space is to abort a set of permutations once it is declared that they are not realistic.

My simple rule for "realistic": say you have these pixels

Code: Select all

`123456789`

You evaluate these rules on each pixel. Take pixel 5: if the pixels to its left are extremely different colors, then the pixels to its right cannot be, and vice versa. Even though images like this can exist, they are usually the result of pixelation or compression artifacts, so they can be ignored, which cuts down the search space significantly. But that still leaves a ton of search space... and that filtering function will most likely only speed up the computation by a few hundred years..
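As a rough sketch of that rule on a single grayscale row (the threshold for "extremely different" is an invented placeholder, not from the post):

```ruby
# Reject a row containing a pixel whose left AND right neighbours both
# differ from it wildly at once (an isolated spike, i.e. likely noise).
THRESHOLD = 150  # assumed cutoff on a 0..255 scale

def unrealistic?(row)
  (1...(row.length - 1)).any? do |i|
    left_jump  = (row[i] - row[i - 1]).abs
    right_jump = (row[i + 1] - row[i]).abs
    left_jump > THRESHOLD && right_jump > THRESHOLD
  end
end

unrealistic?([10, 220, 10])  # => true  (isolated spike)
unrealistic?([0, 120, 240])  # => false (steep but consistent gradient)
```

The same check would run per pixel in 2-D by applying it along rows and columns.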

Also, what language will you be doing this with? Are you saving these generated images to disk or just displaying them live? I'd be kinda interested to help you with it if it's in a language/toolkit I know..
My new blag(WIP, so yes it's still ugly..)
DEFIANCE!

This is microtext. Zooming in digitally makes it worse. Get a magnifying glass.. works only on LCD

Nath
Posts: 3148
Joined: Sat Sep 08, 2007 8:14 pm UTC

Start with low resolution, low-color-depth images, and use those to initialize higher quality images. Repeat.
You can select some promising candidates at each stage, by penalizing highly discontinuous or otherwise unrealistic images.
Better still, write a classifier that takes a bunch of 'interesting' images as input, learns some good predictive features, and tells you which of your candidates are promising.
Knit bits and pieces of promising candidates together using heuristic algorithms such as genetic algorithms. Iterate.

These steps can be put together in various ways. It's a tough problem, but there's a whole bunch of ways to approach it.

xulaus
Posts: 136
Joined: Thu Jul 03, 2008 11:09 am UTC

Earlz wrote:The very sad thing is that there is no way to cheaply filter this space, because the "filtering" function would incur more overhead than just computing the extra image.
I was planning on combating this by seeing if I could reverse-engineer filters into generators, so that I only apply a few filters to a smaller search space.
Earlz wrote:Also, what language will you be doing this with? Are you saving these generated images to disk or just displaying them live? I'd be kinda interested to help you with it if it's in a language/toolkit I know..
I haven't decided on a language/toolkit yet; I wanted to get a working algorithm in my head before I started to code. Chances are it will be in C++, as that is the lowest-level language I know how to use. I think I'd be saving images to disk, as I have no GUI experience and it would be faster.

Nath wrote:Better still, write a classifier that takes a bunch of 'interesting' images as input, learns some good predictive features, and tells you which of your candidates are promising. Knit bits and pieces of promising candidates together using heuristic algorithms such as genetic algorithms.
Could you link me to some papers or good tutorials on this? I've tried to learn how to do this sort of thing in the past and ended up confused :/

Half of the search space is going to be the NOT of the other half, so if one image is interesting, so is its inverse.
Three quarters of this half are going to be rotations: if one is interesting, so are the other three.
I came up with the idea of using standard deviation as well: if the standard deviation of the pixels is too high, the picture is noise, and if it is too low, it is uniform.
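The standard-deviation screen might look like this (both cutoff values below are arbitrary placeholders):

```ruby
# Standard-deviation screen for one grayscale image given as a flat
# array of 0..255 values. Both cutoffs are assumed, not from the post.
def stddev(pixels)
  mean = pixels.sum.to_f / pixels.length
  Math.sqrt(pixels.sum { |p| (p - mean)**2 } / pixels.length)
end

def boring?(pixels, low = 5.0, high = 110.0)
  sd = stddev(pixels)
  sd < low || sd > high  # near-uniform, or close to white noise
end

boring?(Array.new(25, 50))  # => true (perfectly uniform)
boring?([100, 150] * 8)     # => false (sd = 25, plausibly structured)
```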

Thank you both for your help.
Meaux_Pas wrote:I don't even know who the fuck this guy is

Nath
Posts: 3148
Joined: Sat Sep 08, 2007 8:14 pm UTC

xulaus wrote:
Nath wrote:Better still, write a classifier that takes a bunch of 'interesting' images as input, learns some good predictive features, and tells you which of your candidates are promising. Knit bits and pieces of promising candidates together using heuristic algorithms such as genetic algorithms.
Could you link me to some papers or good tutorials on this? I've tried to learn how to do this sort of thing in the past and ended up confused :/

Well, if you're doing this for fun, you could try neural networks. Rate a bunch of images as 'interesting' or 'uninteresting' manually. Give your network the images as input, and train it using backpropagation.

Another option: try to identify features that might be informative -- image contrast and so on. Extract these features from a bunch of images. You can use them as input to the neural network suggested above, or to any other classifier (e.g. naive Bayes, decision trees, support vector machines, Markov networks). There's a huge amount of stuff you could try here.
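As a deliberately tiny illustration of the feature-plus-classifier idea (one made-up feature, toy data, and a midpoint threshold standing in for a real classifier):

```ruby
# One hand-picked feature (global contrast) and a "learned" threshold.
# Real work would use many features and a proper classifier.
def contrast(pixels)
  pixels.max - pixels.min
end

# "Train" by splitting on the midpoint between the two class means.
def learn_threshold(interesting, uninteresting)
  mean = ->(imgs) { imgs.sum { |i| contrast(i) }.to_f / imgs.length }
  (mean.call(interesting) + mean.call(uninteresting)) / 2.0
end

interesting   = [[0, 120, 255], [10, 90, 240]]  # toy labelled data
uninteresting = [[100, 104, 101], [99, 98, 97]]
t = learn_threshold(interesting, uninteresting)

promising = ->(img) { contrast(img) > t }
promising.call([0, 200, 50])  # => true
```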

Yet another option: find some edge detection software. Use these edges as input. Penalize edge discontinuities.

Here's a somewhat related PhD thesis.

xulaus wrote:I came up with the idea of using standard deviation as well: if the standard deviation of the pixels is too high, the picture is noise, and if it is too low, it is uniform.

I don't know if I buy this. What about coherent images with multiple highly contrasting regions? A picture of a dark mountain against a light sky?

Earlz
Gets Obvious Implications
Posts: 785
Joined: Sat Jun 09, 2007 8:38 am UTC
Location: USA
Contact:

Nath wrote:Start with low resolution, low-color-depth images, and use those to initialize higher quality images. Repeat.
You can select some promising candidates at each stage, by penalizing highly discontinuous or otherwise unrealistic images.
Better still, write a classifier that takes a bunch of 'interesting' images as input, learns some good predictive features, and tells you which of your candidates are promising.
Knit bits and pieces of promising candidates together using heuristic algorithms such as genetic algorithms. Iterate.

These steps can be put together in various ways. It's a tough problem, but there's a whole bunch of ways to approach it.

How exactly would you go about magically scaling up the size and resolution of a candidate? Would you suggest just grabbing input from /dev/random, or actually doing a permutation with repetition (counting from 0 to x) to generate images at each stage? With the counting, anything bigger than an 8-byte image is going to take a lot of time to go through each candidate.
My new blag(WIP, so yes it's still ugly..)
DEFIANCE!

This is microtext. Zooming in digitally makes it worse. Get a magnifying glass.. works only on LCD

Nath
Posts: 3148
Joined: Sat Sep 08, 2007 8:14 pm UTC

Earlz wrote:How exactly would you go about magically scaling up the size and resolution of a candidate?

See section 5.
That paper is talking about a different sort of problem, but it's the same idea. You've got some way to transform an image to make it more interesting. So you take a small image, simply scale it up, and run your transformation. In the linked paper, they're using belief propagation to do the transformation, and they have different objective functions (rather than 'interesting'). But it's the same sort of thing.

xulaus
Posts: 136
Joined: Thu Jul 03, 2008 11:09 am UTC

Nath wrote:
xulaus wrote:I came up with the idea of using standard deviation as well: if the standard deviation of the pixels is too high, the picture is noise, and if it is too low, it is uniform.
I don't know if I buy this. What about coherent images with multiple highly contrasting regions? A picture of a dark mountain against a light sky?

You have a point. There are other tests for randomness that could be used instead, though. Thank you for the links; they are on my list of stuff to read.
Meaux_Pas wrote:I don't even know who the fuck this guy is

di gama
Posts: 30
Joined: Fri Apr 10, 2009 7:47 am UTC

xulaus wrote:
Nath wrote:
xulaus wrote:I came up with the idea of using standard deviation as well: if the standard deviation of the pixels is too high, the picture is noise, and if it is too low, it is uniform.
I don't know if I buy this. What about coherent images with multiple highly contrasting regions? A picture of a dark mountain against a light sky?

You have a point. There are other tests for randomness that could be used instead, though. Thank you for the links; they are on my list of stuff to read.

A variation could be to look at the average brightness of the sharpen mask on the image. If it is noise, the mask will be very bright on most pixels, whereas if it is smooth/blurry, it will be darker. This does penalize sharp images slightly, but unless a picture is grainy, it should be smooth in some large areas, with only minuscule boundaries between areas glowing.
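That test could be sketched with a Laplacian kernel standing in for the sharpen mask (the kernel choice here is an assumption):

```ruby
# Average absolute response of a Laplacian kernel over interior pixels;
# high values suggest grain/noise, low values a smooth image.
def mask_brightness(img)  # img: 2-D array of grayscale values
  h, w = img.length, img[0].length
  total = count = 0.0
  (1...h - 1).each do |y|
    (1...w - 1).each do |x|
      lap = 4 * img[y][x] - img[y - 1][x] - img[y + 1][x] -
            img[y][x - 1] - img[y][x + 1]
      total += lap.abs
      count += 1
    end
  end
  total / count
end

flat  = Array.new(5) { Array.new(5, 100) }
spike = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
mask_brightness(flat)   # => 0.0    (smooth region: mask is dark)
mask_brightness(spike)  # => 1020.0 (isolated bright pixel glows)
```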

Quigibo
Posts: 29
Joined: Fri Nov 20, 2009 9:45 am UTC

Someone did this for just a 5x5 monochrome square, to produce every possible interesting "Space Invader". I think the criteria were symmetry, few isolated points, and probably some other things. Here is a sample of it:

A programmer's last program: Goodbye World!

Josephine
Posts: 2142
Joined: Wed Apr 08, 2009 5:53 am UTC

so... there was no criterion for contiguousness? that seems important.
Belial wrote:Listen, what I'm saying is that he committed a felony with a zoo animal.

WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

That picture contains a good 1/4 or so of all the possible symmetric ones, so it's not being very picky beyond symmetry.
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

Josephine
Posts: 2142
Joined: Wed Apr 08, 2009 5:53 am UTC

I'd like to see a version of Space Invaders that generates its own invaders...
Belial wrote:Listen, what I'm saying is that he committed a felony with a zoo animal.

Area Man
Posts: 256
Joined: Thu Dec 25, 2008 8:08 pm UTC
Location: Local

If all you want is symmetry, then you only need to generate 3x5 px images and append each one's mirror image for a 6x5 result.

At such low resolution, going from 5 wide to 6 won't give you precisely the same images, but it takes the count from 34M down to 32K and guarantees symmetry.
Bisquick boxes are a dead medium.

Meteorswarm
Posts: 979
Joined: Sun Dec 27, 2009 12:28 am UTC
Location: Ithaca, NY

Area Man wrote:If all you want is symmetry, then all you need to generate is 3x5 px images and add itself flipped for 6x5 result.

At such low resolution going from 5 wide to 6 won't give you precisely the same images, but takes the number from 34M to 32K and guarantees symmetry.

You'd just mirror the left two columns instead, retaining 5x5-ness and still keeping the 32K options.
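That mirroring scheme is easy to sketch:

```ruby
# Enumerate the left three columns (15 bits) and mirror the outer two,
# giving every horizontally symmetric 5x5 sprite: 2^15 = 32,768 of them.
def symmetric_invader(n)  # n in 0...2**15
  (0...5).map do |row|
    left = (0...3).map { |col| (n >> (row * 3 + col)) & 1 }
    left + [left[1], left[0]]  # columns read a b c b a
  end
end

count = 2**15  # => 32768 candidates
```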
The same as the old Meteorswarm, now with fewer posts!

TheChewanater
Posts: 1279
Joined: Sat Aug 08, 2009 5:24 am UTC
Location: lol why am I still wearing a Santa suit?

@op

When I read that post, what I thought of was indexing an image you already have containing the colors you're looking for into a 6-bit palette, which is, of course, enough for 64 colors. Then, run the program to generate every possible 8x8 px image using the 6-bit palette. However, this still comes to over 10^106 terabytes.

I thought about instead breaking a 128x128 8-bit image of something into 2x2 square tiles and brute-forcing every arrangement of those tiles, assuming they don't overlap and the same tile can be used more than once. Then do the same with the resulting image (offsetting every pixel by 1x1 and wrapping the edges so the same pixels aren't in the same tiles) until you get something. Unfortunately, even a 16x16 source image would take more storage than any modern hard drive can hold.

So, yeah, you'd need some sort of smart image generator or filter. There's no way you can brute-force it, even at 16x16 px with 64 2x2 tiles, and having it show you images so you manually select the most interesting would take forever. IMO, an image filter that only selects interesting ones would take longer than just having an algorithm that only generates potentially interesting ones in the first place.

http://internetometer.com/give/4279
No one can agree how to count how many types of people there are. You could ask two people and get 10 different answers.

xulaus
Posts: 136
Joined: Thu Jul 03, 2008 11:09 am UTC

I've generated a list of 6455 "interesting" 4x4 monochrome images. My maths says that is about 7x10^7 possibilities for 120x90 images. I've just got to write something to intelligently sort through those now, so you folks might be seeing some results soon.
Meaux_Pas wrote:I don't even know who the fuck this guy is

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

I guess it's easier to teach chimps Shakespeare?
You seem to be hitting the same boundary that compression reaches: eventually, the compression program is larger than the file you want to compress, and takes up more space than it saves.
To pick out interesting random images, you're getting close to just trying to match them against already-existing images. Or could you define the "interesting" part you're looking for?
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

Link
Posts: 1419
Joined: Sat Mar 07, 2009 11:33 am UTC
Location: ᘝᓄᘈᖉᐣ
Contact:

My god, I'm just barely beginning to understand the scale of this all. The number of possible 24-bit WSXGA+ (1680*1050) wallpapers alone has 12,744,406 digits. By comparison, the number of atoms in the observable universe is most likely an 81-digit number.
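That digit count checks out without ever constructing the number:

```ruby
# Digit count of the number of 24-bit 1680x1050 images, computed from
# logarithms rather than by building the 12-million-digit integer.
bits   = 1680 * 1050 * 24  # 42,336,000 bits per image
digits = (bits * Math.log10(2)).floor + 1
puts digits  # => 12744406
```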

I wouldn't know how to find which images of 5x5, 6-bit pixels are boring and which are interesting, but with larger images, there are of course some ways to specifically pick interesting ones. Your infinitely fast computer with infinite memory could easily run some facial recognition algorithms and weighted comparisons on each of your [imath]10^{10^{7.105319594853081}}[/imath] WSXGA+ wallpapers, and such algorithms already exist for normal computers. You should similarly also be able to narrow it down to realistic-looking images relatively easily by lighting detection, colour-space restriction, etcetera. So if you are looking for, for example, dissimilar images* containing at least one human face, your Godputer should be able to narrow that number down to, what, a few sextillion or so - and that's using current algorithms.

*By dissimilar, in this case, I mean different people in different poses and the like. There will be many, many images that are exact copies of another image with just a few different pixels. Those images are mostly useless.

Of course, keeping in the Godputer vein, you could employ entirely different techniques. For example, you could easily upload an entire human mind to such a machine, and you will essentially have a real person to sift through the images - a tireless person who can think at infinite speed. Although, I'm not really sure if that's a good thing. The combination of human nature and an infinitely fast consciousness most likely will not be pretty.

Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Contact:

Link wrote:For example, you could easily upload an entire human mind to such a machine, and you will essentially have a real person to sift through the images - a tireless person who can think at infinite speed.

Off topic, but…

That's also almost certainly horribly immoral. It's slavery, after all. It's likely that we'll eventually regard mind-states as being people under the law.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

Berengal
Superabacus Mystic of the First Rank
Posts: 2707
Joined: Thu May 24, 2007 5:51 am UTC
Location: Bergen, Norway
Contact:

Xanthir wrote:
Link wrote:For example, you could easily upload an entire human mind to such a machine, and you will essentially have a real person to sift through the images - a tireless person who can think at infinite speed.

Off topic, but…

That's also almost certainly horribly immoral. It's slavery, after all. It's likely that we'll eventually regard mind-states as being people under the law.
Hey, if I could get infinite CPU time for my brain, I wouldn't mind going through a few googolplex pictures looking for porn in return...
It is practically impossible to teach good programming to students who are motivated by money: As potential programmers they are mentally mutilated beyond hope of regeneration.

Josephine
Posts: 2142
Joined: Wed Apr 08, 2009 5:53 am UTC

Yeah, if that brain can run extremely fast, it's no time at all to them, and you (and it, if it wants) get your weirdest fantasy fulfilled in pictures. Everybody wins!
Belial wrote:Listen, what I'm saying is that he committed a felony with a zoo animal.

Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Contact:

nbonaparte wrote:Yeah, if that brain can run extremely fast, it's no time at all to them.

Think about what you just said again.

Assuming a mind can run extremely fast relative to our native wetware, all that means is that they experience *vastly* larger subjective stretches of time.

Scanning through all those images isn't any faster to them, subjectively. It takes exactly as much subjective time as it would for you. The fact that this subjective time flashes by in a small amount of objective time doesn't matter. You're still subjecting the mind to millions (or more likely, a much larger power of ten) of subjective years of drudgery.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

Posts: 1419
Joined: Sat Mar 07, 2009 11:33 am UTC
Location: ᘝᓄᘈᖉᐣ
Contact:

Xanthir, that's an interesting point you raise. On an infinitely fast computer emulating a mind at infinite speed, that mind would actually experience infinite subjective time between instructions from an external operator. It would, perhaps, be a good idea if the infinitely fast computer had two modes: "interaction" and "operation". Interaction mode would have the processes run at whatever speed is required. Operation mode executes a bounded region of code at infinite speed. When the boundary of that section is reached (which it will be, in zero objective time and finite subjective time), the computer resets to interaction mode. The picture-sorting mind would experience normal time in interaction mode, and a very long but not infinite span of time when it starts to sort those pictures.

Of course, the advantage of a digital consciousness is that it would be possible, or could be made possible, to "retcon" that mind. Suppose you have a googolplex images you want to sort through, on a stack $images[(googolplex)]. You give the mind the first image. It decides whether the image is worth keeping: if so, it is pushed to the output pile; if not, it is discarded. The image is popped off the stack; length($images) is now (googolplex - 1). You jump back to the start of the subroutine and repeat until length($images) is zero. While the mind is processing, it only ever experiences having sorted 0 or 1 images (ergo, it does not tire, since looking at a single image and deciding whether it is worth keeping is trivial). When it is done processing, it has experienced processing zero images. Letting the mind operate like this has two distinct advantages: (A) it is kept in sync with the real universe, as in the end it has no recollection of ever spending any subjective time, while according to the outside universe it has also experienced zero objective time; and (B) at no point does it ever experience having looked at more than one image. Voilà, problem solved.
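The retcon loop, modelled very crudely in ordinary Ruby (the hash is just a toy proxy for a mind-state snapshot; no actual minds involved):

```ruby
# The mind's "memory" is restored from a snapshot on every pass, so it
# only ever experiences having sorted a single image.
images = Array.new(10) { rand(2) }  # stand-in for the googolplex stack
kept   = []
until images.empty?
  memory = { images_seen: 0 }  # snapshot restored: amnesia each pass
  img = images.pop
  memory[:images_seen] += 1    # subjectively: "I have sorted one image"
  kept << img if img == 1      # the trivial "worth keeping" decision
end
```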

Josephine
Posts: 2142
Joined: Wed Apr 08, 2009 5:53 am UTC

Link, that's perfect. It may raise some ethical questions if the system is a brain simulation jumping back to a prior state, but that would work brilliantly.
Belial wrote:Listen, what I'm saying is that he committed a felony with a zoo animal.

Posts: 1419
Joined: Sat Mar 07, 2009 11:33 am UTC
Location: ᘝᓄᘈᖉᐣ
Contact:

nbonaparte wrote:Link, that's perfect. It may raise some ethical questions if the system is a brain simulation jumping back to a prior state, but that would work brilliantly.
Psh, with infinitely fast computers, who cares about ethics‽

Seriously, though. There may indeed be some ethical concerns, but they could be minimised if, for example, the uploaded mind was that of the operator him- or herself. That, and with such godlike machines, even the most basic and taken-for-granted things and concepts in today's world would probably be revolutionised soon enough (after the operator finishes his infinite porn, of course). Ethics and humanity already vary wildly between eras and cultures, and an infinite-speed computer would probably be the single most world-altering thing to ever exist (in that hypothetical parallel universe where such a machine can exist) - so such concerns could well become moot.

Berengal
Superabacus Mystic of the First Rank
Posts: 2707
Joined: Thu May 24, 2007 5:51 am UTC
Location: Bergen, Norway
Contact:

Also, one shouldn't forget the power of continuations. They're the time-travel devices of computers.

Essentially, if you were forced to look through googolplexes of pictures and you had infinite time and memory available, what you'd do is take the continuation, pop the first image off the image-stack, push it onto the boring/interesting stack and finally invoke the continuation to find yourself back at the beginning, except with the first picture processed. Repeat until all pictures are processed. You won't remember any of it, except maybe the last picture if you didn't invoke the continuation after that.
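Ruby actually ships call/cc, so the trick can be demonstrated literally (MRI-only sketch; `callcc` is obsolete but still present in the stdlib, and the odd/even "interesting" rule is a toy stand-in):

```ruby
require 'continuation'  # MRI's home for Kernel#callcc

# Capture a continuation once, then keep re-invoking it; each pass
# through the "present" handles exactly one image.
images      = [3, 1, 4, 1, 5]
interesting = []
boring      = []

jump_back = nil
callcc { |c| jump_back = c }  # "the beginning": everything below here
unless images.empty?
  img = images.shift
  (img.odd? ? interesting : boring) << img  # toy "interesting" rule
  jump_back.call  # time-travel back to just after callcc
end
# falls through here only once images is empty
```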
It is practically impossible to teach good programming to students who are motivated by money: As potential programmers they are mentally mutilated beyond hope of regeneration.

Aniviller
Posts: 23
Joined: Fri Oct 19, 2007 8:14 pm UTC

I suggest you try to discard junk by detecting images with unordered information. Most images are just random pixels (white noise). Pixels usually vary smoothly, except at the edges of depicted objects. Compute the Fourier transform of the image and:
1) discard images with only the lowest-frequency components (these are blurry gradients);
2) discard images with more high-frequency than low-frequency content: generally, those where amplitude increases with frequency (these are pixel noise);
3) discard images with a rather uniform spectrum (not enough variation in amplitudes) and random phases; this one is harder to test, so leave it for later (these are general noise);
4) restrict the search to images with a certain average brightness (the 0th component), i.e. pick only one image from each series of equivalent images that differ only in brightness.

This way the search space is cut down considerably. And here's another upside: you don't have to compute a Fourier transform of every image you try, which would take a lot of CPU. You can search directly in Fourier space and only transform back if the image satisfies the conditions (and then perform other tests on it). And Fourier space is "smooth": varying one component doesn't produce a pixel spike, as in the direct image, but adds some variation across the whole image.

You could also calculate some sort of information density; look up image entropy or something like that.
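Rule 2 can be sketched on a single row with a naive DFT, so no FFT library is needed (the low/high band split below is an arbitrary choice):

```ruby
# Naive 1-D DFT magnitudes; fine for a sketch at this scale.
def dft_mags(xs)
  n = xs.length
  (0...n).map do |k|
    re = im = 0.0
    xs.each_with_index do |x, t|
      ang = -2 * Math::PI * k * t / n
      re += x * Math.cos(ang)
      im += x * Math.sin(ang)
    end
    Math.hypot(re, im)
  end
end

# Rule 2: reject a row whose high-band energy beats its low-band energy.
def noisy?(row)
  mags = dft_mags(row)
  half = row.length / 2
  low  = mags[1..half / 2].sum
  high = mags[(half / 2 + 1)..half].sum
  high > low
end

smooth = (0...16).map { |t| Math.sin(2 * Math::PI * t / 16) }  # one slow cycle
noise  = (0...16).map { |t| t.even? ? 1.0 : 0.0 }              # fastest alternation
noisy?(smooth)  # => false
noisy?(noise)   # => true
```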

MHD
Posts: 630
Joined: Fri Mar 20, 2009 8:21 pm UTC
Location: Denmark

Berengal wrote:
Xanthir wrote:
Link wrote:For example, you could easily upload an entire human mind to such a machine, and you will essentially have a real person to sift through the images - a tireless person who can think at infinite speed.

Off topic, but…

That's also almost certainly horribly immoral. It's slavery, after all. It's likely that we'll eventually regard mind-states as being people under the law.
Hey, if I could get infinite CPU time for my brain, I wouldn't mind going through a few googolplex pictures looking for porn in return...

Berengal wrote:Also, one shouldn't forget the power of continuations. They're the time-travel devices of computers.

Essentially, if you were forced to look through googolplexes of pictures and you had infinite time and memory available, what you'd do is take the continuation, pop the first image off the image-stack, push it onto the boring/interesting stack and finally invoke the continuation to find yourself back at the beginning, except with the first picture processed. Repeat until all pictures are processed. You won't remember any of it, except maybe the last picture if you didn't invoke the continuation after that.

You just summarised my thoughts.
I bow to thy immense knowledge and wit, Berengal.
EvanED wrote:be aware that when most people say "regular expression" they really mean "something that is almost, but not quite, entirely unlike a regular expression"

Xanthir
My HERO!!!
Posts: 5426
Joined: Tue Feb 20, 2007 12:49 am UTC
Contact:

Berengal wrote:Also, one shouldn't forget the power of continuations. They're the time-travel devices of computers.

Essentially, if you were forced to look through googolplexes of pictures and you had infinite time and memory available, what you'd do is take the continuation, pop the first image off the image-stack, push it onto the boring/interesting stack and finally invoke the continuation to find yourself back at the beginning, except with the first picture processed. Repeat until all pictures are processed. You won't remember any of it, except maybe the last picture if you didn't invoke the continuation after that.

There are still interesting ethical issues if the entity doing the image-checking isn't you. Rather than forcing one intelligent agent to look at 1eFoo pictures, you're essentially forcing 1eFoo agents to look at a single picture each. How ethical that is depends on how you aggregate, and this will turn out to be a *very* important question as we develop computer intelligences (either native or emulated), once creating vast numbers of agents to help with a task is actually possible.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

Meteorswarm
Posts: 979
Joined: Sun Dec 27, 2009 12:28 am UTC
Location: Ithaca, NY

Berengal wrote:Also, one shouldn't forget the power of continuations. They're the time-travel devices of computers.

Essentially, if you were forced to look through googolplexes of pictures and you had infinite time and memory available, what you'd do is take the continuation, pop the first image off the image-stack, push it onto the boring/interesting stack and finally invoke the continuation to find yourself back at the beginning, except with the first picture processed. Repeat until all pictures are processed. You won't remember any of it, except maybe the last picture if you didn't invoke the continuation after that.

Compu-brain's thoughts:

Hmm. I should probably sort these pictures.

Oh look, they're already sorted. How convenient! Hey, why is it so late?
The same as the old Meteorswarm, now with fewer posts!

Burningmace
Posts: 6
Joined: Mon Feb 08, 2010 1:25 pm UTC

General rule: The more detailed your specifications, the smaller your search space.

If you were _just_ looking for celeb porn, you could define a constraint like "the picture must contain an area of approximately <w,h> size that matches facial constraints close to <celebrity face>". You could also say that a certain percentage of the image must be skin-coloured (that's what we're going for, right?) and could even work this into your generation algorithm. You could extend this to state that certain regions of the image should conform to a skin-to-not-skin colour ratio. To reduce the number of random images, you could set a threshold on the number of hard transitions (use entropy calculation and edge detection) in the image.

You could massively reduce the search space by taking a regular pr0n image and cutting out the actors in Photoshop, then only performing the search on the section you cut out, with a 5% edge tolerance. That way you've got a backdrop pre-generated, and the people can fit into it.

An iterative "genetic" approach to the idea would be to start off with a low resolution (10x10px) image and take only the best (i.e. candidate images that scored within the top 10% of your tests) through to the next generation. Each generation would then have the resolution increased (i.e. 10x10, 20x20, 40x40, 80x80, 160x160...) and the pass-rate reduced by half (i.e. 10%, 5%, 2.5%, 1.25%, etc...). Eventually you'll have images that are visibly close to great results.
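A toy version of that coarse-to-fine search might look like the following; `score` is a stand-in fitness function (here just "fraction of bright pixels"), not a real detector, and the 5% mutation rate is arbitrary:

```python
import random

def score(img):
    """Hypothetical fitness: fraction of bright pixels."""
    return sum(img) / len(img)

def upsample(img, w):
    """Double the resolution: each pixel becomes a 2x2 block."""
    rows = [img[i:i + w] for i in range(0, len(img), w)]
    out = []
    for row in rows:
        doubled = [p for p in row for _ in (0, 1)]
        out.extend(doubled)   # emit the doubled row twice so each
        out.extend(doubled)   # pixel becomes a 2x2 block
    return out

def search(generations=4, w=4, pop=100, keep=0.1):
    population = [[random.randint(0, 1) for _ in range(w * w)]
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        survivors = population[:max(1, int(len(population) * keep))]
        population = [upsample(img, w) for img in survivors]
        w *= 2       # resolution doubles each generation
        keep /= 2    # pass rate halves each generation, as described
        while len(population) < pop:   # refill with mutated copies
            parent = random.choice(population)
            population.append([p ^ (random.random() < 0.05) for p in parent])
    return max(population, key=score)
```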

TheChewanater
Posts: 1279
Joined: Sat Aug 08, 2009 5:24 am UTC
Location: lol why am I still wearing a Santa suit?

Meteorswarm wrote:Hey, why is it so late?

Late? But it's infinitely fast.

http://internetometer.com/give/4279
No one can agree how to count how many types of people there are. You could ask two people and get 10 different answers.

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

I'll go back to my monkeys typing Shakespeare analogy.
Without an infinitely fast computer, you are going to end up building a program that draws a celeb. With so many rules needed to find the celeb pictures among your randomly generated ones, you may as well run the rules backwards and draw the celeb directly. You're already looking for pixels that are skin coloured, so just draw those. You're writing rules for faces, so just draw those too.

It just seems the wrong way to do it.
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

Meteorswarm
Posts: 979
Joined: Sun Dec 27, 2009 12:28 am UTC
Location: Ithaca, NY

TheChewanater wrote:
Meteorswarm wrote:Hey, why is it so late?

Late? But it's infinitely fast.

I was working off this:
Berengal wrote: you had infinite time and memory available,
The same as the old Meteorswarm, now with fewer posts!

TheChewanater
Posts: 1279
Joined: Sat Aug 08, 2009 5:24 am UTC
Location: lol why am I still wearing a Santa suit?

Hey, has anyone ever thought about brute-forcing all possible 100%-valid HTML pages within a given filesize? Using the standard Lorem Ipsum stuff for content, you might get a more reasonable selection to automatically sort through and find an awesome layout like no one has ever seen.
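Even a toy enumerator shows how fast that space grows: with k tags and up to n top-level elements you already get k + k² + … + kⁿ pages. This sketch uses an arbitrary three-tag vocabulary and skips nesting entirely:

```python
from itertools import product

# Toy enumeration of tiny "layouts": every sequence of 1..max_elems
# block elements from an arbitrary tag vocabulary, filled with
# placeholder text. A real version would also recurse for nesting.
TAGS = ["div", "p", "h1"]
LOREM = "Lorem ipsum"

def pages(max_elems):
    for n in range(1, max_elems + 1):
        for combo in product(TAGS, repeat=n):
            body = "".join(f"<{t}>{LOREM}</{t}>" for t in combo)
            yield f"<!DOCTYPE html><html><body>{body}</body></html>"
```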

http://internetometer.com/give/4279
No one can agree how to count how many types of people there are. You could ask two people and get 10 different answers.

Earlz
Gets Obvious Implications
Posts: 785
Joined: Sat Jun 09, 2007 8:38 am UTC
Location: USA
Contact:

Burningmace wrote:
An iterative "genetic" approach to the idea would be to start off with a low resolution (10x10px) image and take only the best (i.e. candidate images that scored within the top 10% of your tests) through to the next generation. Each generation would then have the resolution increased (i.e. 10x10, 20x20, 40x40, 80x80, 160x160...) and the pass-rate reduced by half (i.e. 10%, 5%, 2.5%, 1.25%, etc...). Eventually you'll have images that are visibly close to great results.

I really like this idea.. I may try to implement something the next time I happen to have some free time..
My new blag(WIP, so yes it's still ugly..)
DEFIANCE!

This is microtext. Zooming in digitally makes it worse. Get a magnifying glass.. works only on LCD

LongLiveTheDutch
Posts: 95
Joined: Fri Nov 27, 2009 9:17 pm UTC

As was said already, but I shall boil it down to two words: facial recognition.

This only works if your end goal is pictures of Brad Pitt and a cockroach (get it?) involved in something illegal.

makc
Posts: 181
Joined: Mon Nov 02, 2009 12:26 pm UTC

More nice words on the subject: spatial colour distribution. The vast majority of such images would be random noise, and I'm sure the maths guys could easily come up with some statistics/frequency analysis to filter out the ~99.999999% of images below a certain threshold. The rest would at least qualify as computer-generated art, if not realistic porn renderings.
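One such statistic, as a sketch: the mean absolute difference between neighbouring pixels. Random noise scores high, anything with spatial structure scores low. This assumes grey values in [0, 1], and the 0.25 cutoff is an arbitrary guess:

```python
# Cheap "is this just noise?" statistic: mean absolute difference
# between horizontally adjacent pixels. Assumes a flat list of grey
# values in [0, 1]; the 0.25 cutoff is an arbitrary guess.
def roughness(img, w):
    rows = [img[i:i + w] for i in range(0, len(img), w)]
    diffs = [abs(a - b) for row in rows for a, b in zip(row, row[1:])]
    return sum(diffs) / len(diffs)

def probably_noise(img, w, cutoff=0.25):
    return roughness(img, w) > cutoff
```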

Earlz
Gets Obvious Implications
Posts: 785
Joined: Sat Jun 09, 2007 8:38 am UTC
Location: USA
Contact:

makc wrote:More nice words on the subject: spatial colour distribution. The vast majority of such images would be random noise, and I'm sure the maths guys could easily come up with some statistics/frequency analysis to filter out the ~99.999999% of images below a certain threshold. The rest would at least qualify as computer-generated art, if not realistic porn renderings.

Yes, and the cost of these filters will probably only be something like 0.01ms each.. now multiply that by the image search space and you get a huge number with the exponent well over 100.
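Back-of-envelope, using the OP's already-reduced search space (5x5 pixels at 6 bits each, about 10^45 images) rather than the full 1600x1200 one:

```python
# Rough cost estimate, assuming the OP's figures: 5x5 pixels at
# 6 bits each (2**150 ~ 1.4e45 images) and 0.01 ms per filter test.
images = 2 ** (6 * 25)
seconds = images * 0.01e-3
years = seconds / (365.25 * 24 * 3600)
print(f"{images:.1e} images -> {years:.1e} years of filtering")
```

Even the cut-down space needs on the order of 10^32 years at that rate, so the filters have to run during generation, not after it.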
My new blag(WIP, so yes it's still ugly..)
DEFIANCE!

This is microtext. Zooming in digitally makes it worse. Get a magnifying glass.. works only on LCD