5th of January
Still unable to get it to run, I met with Brad (my supervisor) before the scheduled meeting with ASTC and informed him about the problems I was having: the lab machines were 32-bit, as was the Linux install on my laptop, and qemu-arm required super user access. He promptly found a 64-bit quad core machine that a postgraduate student had been using for their research and requested that I be given access to it, and with it, access to the honours labs. Brad and I then went and met with the developers at ASTC in their meeting room. They discussed their ideas on what my project could entail. It was decided that continuing Andrew's work would be most beneficial: there are still plenty of ideas to expand on and research, it makes a logical base to build from, and the implementation would otherwise have gone to waste. There were four main ideas and points of interest that they most wanted to look into.

  • Instruction Set Expansion
  • Statistics/Comparisons
  • Off-line Translation
  • Translation Block Chaining

Instruction Set Expansion

The simplest and most basic idea to look into is implementing a few more instructions, to see whether any significant additional performance can be squeezed out by covering the next five to ten percent of instructions.

Statistics

Run a series of tests to compare the efficiency and speed of different software. This would also be useful for finding the areas where some programs do additional things to get better performance.
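As a rough sketch of the kind of comparison I have in mind, a tiny timing harness in Python could look like the following. The workloads here are just placeholders standing in for the emulators or programs under test, not anything from the actual project:

```python
import timeit

def benchmark(label, fn, repeat=5, number=1000):
    """Time `fn` several times and report the best run, which is
    usually the least noisy estimate of how fast the code can go."""
    best = min(timeit.repeat(fn, repeat=repeat, number=number))
    print(f"{label}: {best:.6f}s for {number} iterations")
    return best

# Placeholder workloads; a real comparison would invoke each emulator
# on the same benchmark binary and record the timings.
list_time = benchmark("list-comprehension", lambda: [i * i for i in range(200)])
gen_time = benchmark("generator-sum", lambda: sum(i * i for i in range(200)))
```

Collecting the per-workload numbers into a table would then make it easy to spot where one implementation does extra work to run faster.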

Off-line Translation

This idea is apparently where StimulARM is strongest. The idea is that you take the translated blocks and cache/store them while the program is off-line, so that when you rerun the program it can load them back in and run again. The ideal use case is development: if the translated blocks are indexed by a kind of checksum of the original ARM instructions, then when you change a line in a program and recompile, the generated code should be the same for all but the little part you changed, so the next run would be faster. I am not sure whether optimizing off-line was also suggested; I believe it was hinted at but not really discussed. The idea there is that you run the optimization passes while the program or library is not running.
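To make the checksum-indexing idea concrete, here is a minimal sketch (the class and the "translated block" strings are hypothetical, not part of StimulARM): translated blocks are keyed by a hash of the original ARM instruction bytes, so any block whose source bytes are unchanged after a recompile gets a cache hit, while a changed block misses and is retranslated.

```python
import hashlib

class TranslationCache:
    """Hypothetical cache of translated blocks, keyed by a checksum
    of the original ARM instruction bytes."""

    def __init__(self):
        self._blocks = {}  # checksum -> translated host code

    @staticmethod
    def key(arm_bytes: bytes) -> str:
        return hashlib.sha256(arm_bytes).hexdigest()

    def lookup(self, arm_bytes: bytes):
        return self._blocks.get(self.key(arm_bytes))

    def store(self, arm_bytes: bytes, translated):
        self._blocks[self.key(arm_bytes)] = translated

cache = TranslationCache()
block = bytes.fromhex("e3a00001e12fff1e")  # mov r0, #1 ; bx lr
cache.store(block, "<host code for block>")

# Same bytes -> same key -> hit; one changed instruction -> miss.
hit = cache.lookup(block)
miss = cache.lookup(bytes.fromhex("e3a00002e12fff1e"))
```

A real implementation would persist the dictionary to disk between runs, which is the "off-line" part of the idea.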

Translation Block Chaining

This is the big idea and the most appealing: being able to collect blocks together so you can build a larger block out of smaller ones, then send the whole thing to the optimizer. The reason for this is that the more code the optimizer can look at, the better informed it is to make improvements to the code.
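The chaining step above can be sketched as follows. The block names, instruction strings, and successor links are invented for illustration; the point is just that small blocks joined by unconditional control flow can be concatenated into one larger region before handing it to the optimizer:

```python
# Hypothetical basic blocks: name -> (instructions, unconditional successor or None)
blocks = {
    "A": (["ldr r0, [r1]", "add r0, r0, #1"], "B"),
    "B": (["str r0, [r1]"], "C"),
    "C": (["bx lr"], None),
}

def chain(start, blocks):
    """Follow unconditional successors from `start`, concatenating
    the small blocks into one larger region so an optimizer can see
    more code at once. A `seen` set guards against loops."""
    region, name, seen = [], start, set()
    while name is not None and name not in seen:
        seen.add(name)
        insns, succ = blocks[name]
        region.extend(insns)
        name = succ
    return region

region = chain("A", blocks)
```

Here `chain("A", blocks)` yields all four instructions as one region, which is the larger unit the optimizer would then work on.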
