On big pcap files, run the Write - Classify - Analyze stages as a pipeline in the GUI environment, by applying the reader-writer problem to Pcap -> Classifier -> Extraction.
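The reader-writer pipelining idea above can be sketched with bounded queues between stages, so each stage writes while the next one reads. This is a minimal illustration, not Narith's actual code: the stage functions and `run_pipeline` helper are hypothetical stand-ins for the Pcap -> Classifier -> Extraction chain.

```python
import queue
import threading

SENTINEL = object()  # marks end of the stream


def pipeline_stage(in_q, out_q, work):
    """Consume items from in_q, apply work, and push results to out_q.

    Each queue is a classic reader-writer hand-off: this stage reads
    while the previous stage writes, so stages overlap in time.
    """
    while True:
        item = in_q.get()
        if item is SENTINEL:
            if out_q is not None:
                out_q.put(SENTINEL)  # propagate shutdown downstream
            break
        result = work(item)
        if out_q is not None:
            out_q.put(result)


def run_pipeline(chunks, classify, analyze):
    """Feed pcap chunks through classify -> analyze without blocking the caller.

    maxsize bounds the queues so a fast reader cannot fill memory
    faster than the classifier drains it.
    """
    q1, q2 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
    results = []
    t1 = threading.Thread(target=pipeline_stage, args=(q1, q2, classify))
    t2 = threading.Thread(
        target=pipeline_stage, args=(q2, None, lambda c: results.append(analyze(c)))
    )
    t1.start()
    t2.start()
    for chunk in chunks:
        q1.put(chunk)  # producer side: blocks when the queue is full
    q1.put(SENTINEL)
    t1.join()
    t2.join()
    return results
```

In a GUI, the same structure keeps the main thread free: the stages run on worker threads and only completed results are handed back for display.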
Big pcap files can take up gigabytes of space. Reading such a file and storing it in memory in one shot is simply not possible, since it would exceed the allowed user process space on any OS. To handle big pcap files we have to consider the following:
- Supporting file buffering.
- Segmenting big pcap files into smaller files and either distributing the work or processing them sequentially.
Note: since Narith already uses non-blocking I/O and multiprocessing, reading segmented files/buffered data while requesting more data should not be a problem. Such an approach to handling big pcap files should provide:
- Good response time in the GUI environment.
- High resource utilization and time efficiency.
We also need to optimize for memory usage.
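The buffering/segmentation point above can be sketched as a generator that yields a big file in fixed-size segments instead of loading it in one shot. This is a simplified sketch: the segment size is an arbitrary assumption, and a real pcap splitter would additionally have to cut on packet-record boundaries and carry the global pcap header into each segment.

```python
def read_in_segments(path, segment_size=64 * 1024 * 1024):
    """Yield a big file as fixed-size byte segments (default 64 MB).

    Only one segment is resident at a time, so peak memory stays
    bounded regardless of the total file size.
    """
    with open(path, "rb") as f:
        while True:
            segment = f.read(segment_size)
            if not segment:  # EOF
                break
            yield segment
```

Because it is a generator, the consumer (e.g. the classifier stage) can request the next segment only when it is ready, which fits the non-blocking, pipelined design described above.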
Pcap and Classifier objects take a huge amount of memory. For a pcap file of around 20 MB, a Pcap object takes around 70 MB, and a Classifier takes around 23 MB on initialization and 350 MB after packet classification. Memory usage can be reduced by any of the following:
- Using memory-mapped files.
- Storing big data in public class variables to prevent it from getting copied all over.
- Using dictionaries instead of linked lists for packets and protocols (would require heavy refactoring).
- Using strings as the raw representation instead of integers.
A Pcap object is only useful during packet classification; otherwise it is unused. On the other hand, the classified packets, which hold the biggest chunk of memory, are used all the time: during analysis and text/file/HTML reassembly.
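The first option above, memory-mapped files, can be sketched as follows. Pages of the capture are loaded by the OS on demand and evicted under memory pressure, so a multi-gigabyte file does not occupy process memory up front. The helper name is hypothetical, not part of Narith:

```python
import mmap


def open_pcap_mapped(path):
    """Map a capture file into virtual memory for read-only access.

    The returned mmap object supports slicing like bytes, but pages
    are only faulted in when actually touched.
    """
    with open(path, "rb") as f:
        # mmap duplicates the descriptor, so the file object can be closed.
        return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
```

This pairs well with the "store big data once" option: a single mapped buffer can be held in a class variable and sliced by Pcap and Classifier alike, instead of each object keeping its own copy of the raw bytes.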