Nirvana Performance

  • Parallel I/O: The Nirvana communication protocol is a thin protocol layered on top of TCP/IP. It supports parallel communication over multiple sockets that all target a single static port and are initiated by the Nirvana Client. This greatly improves bandwidth utilization, especially when sending large files across WANs, and because the client initiates the request, the parallel I/O mechanism can be configured to pass through a firewall on that single static port. Because the Nirvana Server (or rather its Administrator) has a better understanding of the server's available bandwidth and the speed of the attached storage system, the server controls the number of parallel I/O streams. Not only Data Objects are transferred in parallel; other data, such as large query results, is transported the same way.
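
    To make the mechanism concrete, here is a minimal sketch of client-initiated parallel I/O over multiple TCP sockets that all target one static server port (so a single firewall rule suffices). It is not the Nirvana wire protocol; the host, port, framing, and function names are assumptions for illustration only.

      # Illustrative sketch only: not the Nirvana wire protocol, just plain
      # client-initiated parallel I/O over multiple TCP sockets that all
      # connect to one static server port.
      import os
      import socket
      from concurrent.futures import ThreadPoolExecutor

      SERVER_HOST = "nirvana.example.org"   # hypothetical server address
      SERVER_PORT = 5544                    # hypothetical static port

      def send_chunk(path, offset, length, stream_id):
          """Open one client-initiated socket and push one slice of the file."""
          with socket.create_connection((SERVER_HOST, SERVER_PORT)) as sock:
              # A real protocol would define proper framing; a text header
              # stands in for it here.
              sock.sendall(f"{stream_id} {offset} {length}\n".encode())
              with open(path, "rb") as f:
                  f.seek(offset)
                  sock.sendall(f.read(length))

      def parallel_upload(path, num_streams):
          """Split the file and send the slices concurrently. In Nirvana the
          server, which knows its bandwidth and storage speed, would dictate
          num_streams; here it is a plain argument."""
          size = os.path.getsize(path)
          chunk = -(-size // num_streams)   # ceiling division
          with ThreadPoolExecutor(max_workers=num_streams) as pool:
              for i in range(num_streams):
                  offset = i * chunk
                  length = min(chunk, size - offset)
                  if length > 0:
                      pool.submit(send_chunk, path, offset, length, i)

      # parallel_upload("large_dataset.tar", num_streams=4)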

  • Bulk Operations: As Nirvana Federations grow to large numbers of objects, it becomes increasingly important to handle operations on many objects efficiently. Nirvana supports bulk operations such as bulk register, import, export, copy, move, replicate, and delete so that large tasks complete much more quickly. Bulk operations send data over parallel streams, so multiple files can be transferred to a Storage Resource at the same time, and XML is used as the metadata transport format. With bulk operations, thousands of objects can be handled in a single batch job.
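
    The following hedged sketch illustrates the bulk-operation idea: metadata for many Data Objects is packed into one XML batch so that a single request, rather than one round trip per object, registers them all. The element names and fields are assumptions, not the actual Nirvana bulk API.

      # Hedged illustration of a bulk register batch; element names are
      # hypothetical, not Nirvana's actual XML schema.
      import xml.etree.ElementTree as ET

      def build_bulk_register(objects):
          """objects: iterable of dicts with 'name', 'size', and 'resource' keys."""
          batch = ET.Element("bulkRegister")
          for obj in objects:
              entry = ET.SubElement(batch, "dataObject")
              ET.SubElement(entry, "name").text = obj["name"]
              ET.SubElement(entry, "size").text = str(obj["size"])
              ET.SubElement(entry, "resource").text = obj["resource"]
          return ET.tostring(batch, encoding="unicode")

      # One XML document describes thousands of objects; the file contents
      # themselves would travel over the parallel streams described above.
      xml_payload = build_bulk_register(
          {"name": f"scan_{i:05d}.dat", "size": 1_048_576, "resource": "disk-cache-1"}
          for i in range(10_000)
      )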

  • Latency Minimization: Nirvana features several mechanisms to reduce the latencies that occur during any data transfer:
    1. Caching keeps local copies of frequently accessed Data Objects on fast storage systems or in memory.
    2. Streaming is the continuous transfer and receipt of data without any noticeable lag.
    3. Packing aggregates multiple Data Objects into a single buffer before sending them over the network, reducing the latency associated with issuing many separate requests (see the sketch after this list).
    4. Replication keeps multiple synchronized copies of Data Objects at different sites.
    5. Staging writes Data Objects stored on archival storage media out to disk for faster access.
    6. Containers transparently aggregate multiple Data Objects into one large file that can be stored and transferred more efficiently than many smaller files.
    7. I/O Containers allow remote execution of multiple commands in large batches, using XML as the protocol.
    8. Pre-spawning server processes reduces latency during the initial connection.
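
    As referenced in item 3 above, the sketch below shows the packing idea in its simplest form: several small Data Objects are aggregated into one length-prefixed buffer so that a single send replaces many per-object requests. The framing is an assumption for illustration; the real Nirvana buffer format is not shown.

      # Minimal packing sketch: frame each object as name length, name bytes,
      # payload length, payload, and send the whole buffer at once.
      import struct

      def pack_objects(paths):
          """Aggregate several small Data Objects into one buffer."""
          buf = bytearray()
          for path in paths:
              with open(path, "rb") as f:
                  data = f.read()
              name = path.encode()
              buf += struct.pack("!I", len(name)) + name
              buf += struct.pack("!I", len(data)) + data
          return bytes(buf)

      def unpack_objects(buf):
          """Reverse of pack_objects: yields (name, payload) pairs."""
          pos = 0
          while pos < len(buf):
              (name_len,) = struct.unpack_from("!I", buf, pos); pos += 4
              name = buf[pos:pos + name_len].decode(); pos += name_len
              (data_len,) = struct.unpack_from("!I", buf, pos); pos += 4
              yield name, buf[pos:pos + data_len]
              pos += data_len
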
  • Scalability: Nirvana's architecture, composed of Clients, Agents, and the MCAT database, allows for nearly unlimited scalability: more Agents are added as more capacity is needed, and the MCAT database is scaled using the database vendor's own mechanisms (such as Oracle Real Application Clusters).
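
    A toy dispatcher can make the scaling claim concrete: because Clients reach data through Agents, capacity grows simply by registering more Agents. The class and the round-robin policy below are assumptions for illustration, not Nirvana's actual scheduling behavior.

      # Illustrative only: routing requests across a growing pool of Agents.
      from itertools import cycle

      class AgentPool:
          """Round-robin over however many Agents are currently registered."""
          def __init__(self, agents):
              self._agents = list(agents)
              self._order = cycle(self._agents)

          def add_agent(self, agent):
              # Capacity grows by registering another Agent.
              self._agents.append(agent)
              self._order = cycle(self._agents)

          def dispatch(self, request):
              return f"routing {request} to {next(self._order)}"

      pool = AgentPool(["agent-01", "agent-02"])
      pool.add_agent("agent-03")          # scale out as load grows
      print(pool.dispatch("GET /home/demo/scan_00001.dat"))
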
  • Transaction Architecture: Nirvana's transaction architecture greatly enhances performance for multiple concurrent users. Nirvana transactions are split into these parts:
    1. MCAT query
    2. Data transfer (optional)
    3. MCAT update (optional)

    This allows Nirvana to keep database locks to a minimum while still being able to roll back transactions completely. In this architecture, an entire bulk operation is a single transaction.
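
    A hedged sketch of that three-part pattern follows. The function and object names are hypothetical; the point is that the long-running data transfer happens outside any database lock, while short MCAT query and update steps bracket it and make a clean rollback possible.

      # Sketch of the three-part transaction; mcat and storage are stand-ins
      # for hypothetical client handles, not actual Nirvana APIs.
      def put_object(mcat, storage, local_path, logical_name):
          # 1. MCAT query: resolve the target resource with a brief read.
          resource = mcat.query_resource_for(logical_name)

          physical_path = None
          try:
              # 2. Data transfer (optional): no database locks held while
              #    the bytes move.
              physical_path = storage.transfer(local_path, resource)

              # 3. MCAT update (optional): one short write registers the copy.
              mcat.register(logical_name, resource, physical_path)
          except Exception:
              # Roll back completely: remove any partially transferred data
              # and leave the catalog untouched.
              if physical_path is not None:
                  storage.remove(physical_path)
              raise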