We are working on a project for which we are evaluating PD4ML as a potential library to generate PDFs. However, during our trials we noticed that when we supply 2 MB image files and render a 160-page PDF consisting entirely of those images, the resulting file is no larger than 3 MB. It appears the image files are being compressed before being embedded into the PDF. We want the high-definition images to be taken into the PDF as-is.
Can you confirm whether there is a setting to avoid this compression?
That depends on where the document volume comes from.
If the document is bulky because of large images, that is OK: PD4ML can temporarily unload static resources to a cache and read them back at the PDF writing phase.
If it is bulky because of a large amount of HTML content, it can be problematic. HTML rendering involves a lot of overhead: even a single standalone whitespace is represented in RAM by a number of Java objects (the content itself, HTML attributes, CSS properties, layout info, etc.). In such cases the RAM requirements can be too high.
Experimenting with the eval version can help a lot to estimate the maximum document volume for your particular case.
Also, do not forget the possibility of splitting the document into multiple parts (if your document structure allows it), converting them separately, and finally merging the generated pieces into a single PDF.
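As a rough sketch of the split-and-convert approach, the batching step itself can be done with plain Java before any converter is involved. The splitting logic below is self-contained; the conversion and merge steps are only indicated in comments, since the exact PD4ML render call and the choice of merge tool (e.g. Apache PDFBox's `PDFMergerUtility`, a separate dependency) are assumptions to be checked against the respective documentation.

```java
import java.util.ArrayList;
import java.util.List;

public class PdfBatchSplitter {

    // Split N logical HTML sections into batches of at most batchSize,
    // so each batch can be converted to its own intermediate PDF.
    static List<List<String>> split(List<String> sections, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < sections.size(); i += batchSize) {
            int end = Math.min(i + batchSize, sections.size());
            batches.add(new ArrayList<>(sections.subList(i, end)));
        }
        return batches;
    }

    public static void main(String[] args) {
        // Example: 160 one-page sections, converted 40 pages at a time.
        List<String> sections = new ArrayList<>();
        for (int i = 1; i <= 160; i++) {
            sections.add("<div>page " + i + "</div>");
        }
        List<List<String>> batches = split(sections, 40);
        System.out.println(batches.size() + " batches");

        // For each batch: wrap the sections in <html><body>...</body></html>,
        // feed the string to the converter (PD4ML's render(...) method --
        // consult the PD4ML docs for the exact signature), and write an
        // intermediate PDF file. Finally, merge the intermediate PDFs into
        // one document with a merge tool such as PDFBox's PDFMergerUtility.
    }
}
```

Keeping each batch small bounds the peak RAM used by the HTML renderer, since only one batch's worth of layout objects is in memory at a time.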