
60 Years of Electronics and Computing at Ruđer

Screening of the film “Mudar odabir” (“A Wise Choice”), dedicated to everyone who found a job at Ruđer
Welcome address by the Director, Danica Ramljak
Greetings from Karolj Skala, head of the Centre for Informatics and Computing,
and from Tomislav Šmuc, head of the Division of Electronics
Teleconference link-ups
Lectures
Tour of the new Cyclotron, the Division of Electronics and the Centre for Informatics and Computing
Reception

Framework
In addition to the above-mentioned Grid Service Library Applications, the VEPPAR framework consists of a Web interface and the “glue” that enables the different computation elements to work together.

The picture below shows how Virtue is used for processing and calculating inputs for PovRay and VEPPAR:

[Figure: Virtue used for processing and calculating inputs for PovRay and VEPPAR]

Usage scenario

For many computer-based experiments, rendering the final images and converting them into a time-sliced movie takes too much time to be viable on any single computer. The VEPPAR framework makes this faster by using Grid resources.

To create images from the output of the Mathematical Processor (Virtue), Povray is used. Before the Virtue output can be visualised with Povray, the data must be prepared for it, and this preparation is done by the VEPPAR framework.
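For illustration only, the following sketch shows the kind of preparation involved: reading plain numeric Virtue output (one point per line) and wrapping each point in a Povray sphere object. The file names, the fixed sphere radius and the colouring are hypothetical and are not taken from the actual VEPPAR scripts.

#!/usr/bin/perl
# Illustrative sketch only: turn "x y z" lines of Virtue output into a Povray scene.
use strict;
use warnings;

open(my $in,  '<', 'virtue_output.dat') or die "cannot read input: $!";   # hypothetical name
open(my $out, '>', 'slice.pov')         or die "cannot write scene: $!";  # hypothetical name

print $out "camera { location <0, 0, -20> look_at <0, 0, 0> }\n";
print $out "light_source { <10, 10, -10> color rgb <1, 1, 1> }\n";

while (my $line = <$in>) {
    my ($x, $y, $z) = split(' ', $line);   # one data point per line
    next unless defined $z;
    # each data point becomes a small sphere in the scene
    print $out "sphere { <$x, $y, $z>, 0.1 pigment { color rgb <0.8, 0.2, 0.2> } }\n";
}
close $in;
close $out;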

Generally, Virtue programmes are prepared and tested on small data sets on a local computer until the scientist is satisfied with the algorithm(s). The prepared Virtue programme is then sent through the VEPPAR framework to the Grid as a number of Grid jobs; how many depends on the complexity of the computation for the experiment, specifically on the number of time slices. To enable different rendering styles, the user can choose from a list of rendering options before the job(s) are sent to the Grid.

On the Grid, the results generated by Virtue from the Virtue programme, ‘pre-sliced’ by the VEPPAR framework process described above, are converted according to the specified rendering options (using the v2pov VEPPAR script) into Povray input files and rendered. Finally, the resulting images are collected either as individual image files, assembled into an animated GIF, or encoded into an MPEG movie.
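As a minimal sketch of the final collection step (not part of VEPPAR itself), and assuming ImageMagick and ffmpeg are available and the rendered frames are named frame_0001.png, frame_0002.png, and so on, the assembly into an animated GIF or an MPEG movie could look like this:

#!/usr/bin/perl
# Illustrative sketch only: assemble rendered frames into an animated GIF and an MPEG movie.
use strict;
use warnings;

# animated GIF via ImageMagick (delay of 10/100 s between frames)
system('convert', '-delay', '10', glob('frame_*.png'), 'movie.gif') == 0
    or die "convert failed: $?";

# MPEG movie via ffmpeg at 25 frames per second
system('ffmpeg', '-r', '25', '-i', 'frame_%04d.png', 'movie.mpg') == 0
    or die "ffmpeg failed: $?";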

As mentioned above, after the necessary algorithms have been developed in Virtue, a Virtue programme has to be generated for each individual frame in the time sequence; this is done on the user machine before the jobs are sent to the Grid. In practice, the Virtue programme is usually split into two parts: the first part, executed by Virtue on the local (user) machine before the experiment is sent to the Grid, generates the time-dependent parameters used by the Grid-executed Virtue programmes. The VEPPAR script vir2grid.pl then generates the Grid Virtue time-slice programmes by replacing the pattern “<PATTERN>” in the Grid Virtue programme with the parameter(s) produced by the time-slicing Virtue programme. The number of generated Grid Virtue programmes is therefore equal to the number of parameters (or parameter groups) generated by the time-slicing Virtue programme.
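A minimal sketch of that substitution step is given below; the template name grid.vir, the parameter file params.txt and the output naming are purely illustrative, and the real vir2grid.pl may differ in its details.

#!/usr/bin/perl
# Illustrative sketch of the <PATTERN> substitution: one Grid Virtue programme
# is written for every parameter (i.e. for every time slice).
use strict;
use warnings;

open(my $tpl, '<', 'grid.vir') or die "cannot read template: $!";   # hypothetical name
my $template = do { local $/; <$tpl> };                             # slurp the whole template
close $tpl;

open(my $par, '<', 'params.txt') or die "cannot read parameters: $!";  # one line per time slice
my $slice = 0;
while (my $param = <$par>) {
    chomp $param;
    (my $programme = $template) =~ s/<PATTERN>/$param/g;
    my $name = sprintf('grid_%04d.vir', $slice++);
    open(my $out, '>', $name) or die "cannot write $name: $!";
    print $out $programme;
    close $out;
}
close $par;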

After the vir2grid.pl script has been run and all image type and rendering options have been set, all files necessary for the Grid submission of the job(s) are created in the output directory. The only thing the (Grid-certified) user then has to do is to run the command

glite-wms-job-submit -o cIds.txt -d delegate --collection jdl/

inside the output directory; all the VEPPAR scripts created by vir2grid.pl (such as the vir2pov scripts) will then be submitted to the Grid and run.
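The job identifiers that glite-wms-job-submit writes into cIds.txt can afterwards be used with the matching gLite UI commands to follow the jobs and, once they are done, to retrieve their output sandboxes, along the lines of (the output directory name is illustrative only):

glite-wms-job-status -i cIds.txt
glite-wms-job-output -i cIds.txt --dir output/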

 

Ported to the Grid (Deployed, Usable or other status)

 

Technical and other problems solved during porting/development (short overview with references to D4.3)

When using a pipe in the regular expression passed to Perl's split command under perl 5.8.0, we experienced problems that crashed our scripts. The same syntax works fine on machines running perl 5.8.5. To fix the problem, the code split(/\s*\|\s*/,$linija) was replaced with split(/\|/,$linija).
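In isolation, and with a sample data line that is purely illustrative (the variable name $linija is taken from the original scripts), the change looks like this:

#!/usr/bin/perl
use strict;
use warnings;

my $linija = "1.0 | 2.5 | 3.7";             # illustrative data line
# my @fields = split(/\s*\|\s*/, $linija);  # crashed our scripts under perl 5.8.0
my @fields  = split(/\|/, $linija);         # replacement: split on the pipe only
s/^\s+|\s+$//g for @fields;                 # surrounding whitespace is trimmed separately
print join(',', @fields), "\n";             # prints: 1.0,2.5,3.7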

WMS overload was the main problem in using the Grid. Our daily number of submitted jobs is about 500, and sometimes it increases to 2000 (which is actually quite a small number: a reasonably high-quality movie needs at least 25-30 frames per second, i.e. 25-30 Grid jobs per second of movie, so 2000 time slices correspond to less than a minute and a half of rendered movie). In the beginning we were sending Povray in the input sandbox, but this overloaded the WMS too much, so we switched to submission with proxy delegation.

That worked fine for a few days, but the overload occurred again, so we had to make the input sandbox, and the output sandbox as well, as small as we could. After reducing the sandbox size from 1.4 MB to 30 kB by moving data to a Storage Element (SE), the problems occurred again.

We then had to decrease the number of individually submitted jobs, which we did by submitting them as collections. Submission worked fine and there was no more WMS overload, but there were problems with Romanian sites that were aborting all jobs; this was reported through the ticketing system and fixed. There were also problems with jobs sent to the Hungarian SZTAKI servers, which were automatically aborted; this was likewise reported through the ticketing system and has been fixed.

While retrieving the output of jobs whose status was successful, some jobs never reached the status “Cleared”. This forced us to modify our downloading scripts to check whether the output already exists on our disk before downloading it from the Grid. In rare cases we had problems with corrupted files; unfortunately we did not save information about those jobs.
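A minimal sketch of that check is given below; the file layout, the expected output name frame.png and the direct use of glite-wms-job-output are illustrative assumptions, not the actual downloading scripts.

#!/usr/bin/perl
# Illustrative sketch only: fetch output only for jobs whose result is not already on disk.
use strict;
use warnings;

open(my $ids, '<', 'cIds.txt') or die "cannot read job IDs: $!";
while (my $jobid = <$ids>) {
    chomp $jobid;
    next unless $jobid =~ m{^https://};      # skip the header line written by the submit command
    (my $dir = $jobid) =~ s{[/:]}{_}g;       # derive a local directory name from the job ID
    next if -e "out/$dir/frame.png";         # output already on disk: skip the download
    system('glite-wms-job-output', '--dir', "out/$dir", $jobid) == 0
        or warn "download failed for $jobid\n";
}
close $ids;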

  • Resources, requirements (as perceived at the beginning of the project, during development, and at present)
    In our work we need Perl version 5.8.5 on all machines and a stable Povray (currently 3.6). The newest SQLite database binaries would also be useful to us.
  • Overall scientific/social impact of the application (as perceived at the beginning of the project and now).
      •  Usage scenario - Application usage (Overall user community) & End-user community involvement:
        Take-up from beginning of project
        Present usage status
        Future assessment
        Produced scientific results
  • Summary of user community involvement based on proposed applications
 

 Visualization and art

Diesel fashion show
Cheoptics Hologram
Polygon Playground
Revolutionary Hologram
Nessie
FogScreen