It's been a veeeeery long time since I last updated here, again. But there's a lot of research I've been doing with some great friends of mine over the last year.
So it's probably time to start sharing something; I hope to find the right time and way to do that in the near future.
The first thing I'll share is point cloud stuff (very promising applications have been emerging lately).
Here is a very simple script I wrote after reading a forum thread at SOuP, more or less on the topic we've been researching.
Together with Costanzo D'Angelis (another proud 100celler) I've developed a C library which lets you write Maya nParticle caches directly. Details are here (the version shared is being polished and will be released in a new, less clunky version, hopefully soon).
Back to the forum's topic: what I've done is a simple Python script that draws particles in Maya, using a .pts file as the source.
With it, you should be able to visualize a subset of the particles from the file (or all of them, though that would get very slow) in a Maya scene.
Here it is:
# Maya .PTS to Particles Script
import maya.cmds as cmd
import maya.mel as mel
import re

# set user params
filePath = "C:\\path\\to\\file.pts"
# start from nth particle
start = 100000
# set particle limit
limit = 100000
# subsample by a certain factor
factor = 0.001

posDataList = []
colorDataList = []
step = int(1 / factor)

# particle() returns [transform, shape]; grab the shape node
ptc = cmd.particle()[1]
try:
    cmd.addAttr(ptc, ln='rgbPP', dt='vectorArray')
    cmd.addAttr(ptc, ln='rgbPP0', dt='vectorArray')
    cmd.setAttr(ptc + '.isDynamic', 0)
    # the following is just to speed up visualization
    cmd.setAttr(ptc + '.particleRenderType', 3)
except:
    print 'error encountered while setting up'

# lazily read the file
with open(filePath) as infile:
    count = 0
    for line in infile:
        count += 1
        if count < start:
            continue
        if count % step == 0:
            # .pts line layout: x y z intensity r g b
            data = re.sub(' +', ' ', line).split(' ')
            posData = (float(data[0]), float(data[1]), float(data[2]))
            colorData = (float(data[4]) / 255, float(data[5]) / 255, float(data[6]) / 255)
            posDataList.append(posData)
            colorDataList.append(colorData)
        if (count - start) * factor > limit:
            break

# cmd.emit( o=ptc, pos=posDataList, at=('rgbPP'), vv=colorDataList )
# the above should work, according to Maya's documentation, but it doesn't (Maya crashes)
cmd.emit(o=ptc, pos=posDataList)
ptcs = cmd.particle(ptc, q=1, count=1)
for id in xrange(ptcs):
    cmd.particle(ptc, e=1, at='rgbPP', vv=colorDataList[id], order=id)
cmd.saveInitialState(ptc)
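The parsing core can be exercised outside Maya. Here is a minimal sketch of the per-line parsing, assuming the common .pts layout of `x y z intensity r g b` with 0-255 color channels (the helper name `parse_pts_line` is mine, not part of the script above):

```python
import re

def parse_pts_line(line):
    """Parse one .pts line into a position tuple and a 0-1 RGB tuple.

    Assumes the common layout: x y z intensity r g b,
    with r/g/b stored as 0-255 integers.
    """
    # collapse runs of spaces, as the Maya script does, then split
    data = re.sub(' +', ' ', line.strip()).split(' ')
    pos = (float(data[0]), float(data[1]), float(data[2]))
    # skip data[3] (intensity) and normalize the color channels
    color = (float(data[4]) / 255.0, float(data[5]) / 255.0, float(data[6]) / 255.0)
    return pos, color

pos, color = parse_pts_line("1.0 2.0  3.0 120 255 0 127")
print(pos)    # (1.0, 2.0, 3.0)
print(color)  # blue channel is 127/255
```

Keeping this function pure (no Maya calls) also makes it trivial to swap in other ASCII point formats later.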
The really slow section of the script is the color assignment; apparently it can't be done all at once (the emit command should allow it, but Maya crashed when I tried), so I had to loop over every particle after emitting them all in one go by setting their positions.
After creating the particles, you should be able to cache them (though I didn't test this), so the script only needs to be run once.
To make the point cloud lighter, there are a few params you can set:
- start: begins storing the cloud from the nth point
- limit: self-explanatory; sets the particle limit
- factor: subsamples the final particle count by a certain factor
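The three parameters interact like this: a line (1-indexed, like the script's `count`) is kept only if it is at or past `start`, falls on a `step = 1/factor` boundary, and the budget derived from `limit` hasn't been exhausted yet. A small sketch of that selection logic, with a hypothetical helper name:

```python
def selected_lines(n_lines, start, limit, factor):
    """Return the 1-based line indices the importer would keep.

    Mirrors the script's loop: skip lines before `start`, keep every
    `step`-th line after that, and stop once the budget derived from
    `limit` is exhausted.
    """
    step = int(1 / factor)
    kept = []
    for count in range(1, n_lines + 1):
        if count < start:
            continue
        if count % step == 0:
            kept.append(count)
        if (count - start) * factor > limit:
            break
    return kept

# with factor = 0.5 every 2nd line is kept, starting from line 4
print(selected_lines(10, start=4, limit=100, factor=0.5))  # [4, 6, 8, 10]
```

Note that `limit` bounds the kept count only approximately, since the break test uses the raw line counter scaled by `factor`.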
I noticed some strange behaviour with nParticles, but with ordinary particles it should work just fine. Some friends of mine tested it with nParticles and said it works with those too. Odd.
- the script parses a specific file format, but it should be generalized to work with every possible .pts file
- the script absolutely needs a UI (to set the subsampling parameters) and a progress bar to give feedback on the import's status
- eventually, this script should be expanded to also synchronize a Maya scene with the nCaches we're producing with our DLL. The converter we're developing is a C (C++ in the future) application, which makes the whole task much faster
Here's a first glance at the point cloud we successfully produced with our first implementation (a 300 MB file; 10 million+ particles cached in around 2 minutes on my dual quad-core machine).