Captive Core V19.0.0

What is Captive Core?

Captive Core is a specialised, pared-down Hcnet Core instance whose sole purpose is emitting transaction metadata to Aurora. This means:

  • no need to set up a separate Hcnet Core instance
  • no Core database needed: everything is done in memory
  • much faster ingestion

Captive Hcnet Core completely eliminates all Aurora issues caused by connecting to Hcnet Core's database, but it requires extra time to initialize and manage its Hcnet Core subprocess. Captive Core can be used both in reingestion (aurora db reingest range) and in normal Aurora operation (aurora serve). In fact, using Captive Core to reingest historical data is considerably faster than reingesting without it.

How It Works

When using Captive Core, Aurora runs the hcnet-core binary as a subprocess. The two processes then communicate over a filesystem pipe: Core sends xdr.LedgerCloseMeta structs with information about each ledger, and Aurora reads them.

The behaviour differs slightly between reingesting old ledgers and reading recently closed ledgers:

When reingesting, Hcnet Core is started in a special catchup mode that simply replays the requested range of ledgers. This mode requires an additional 3GiB of RAM because all ledger entries are stored in memory, making it extremely fast. This mode only depends on the history archives, so a Captive Core configuration is not required.
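As a sketch, a reingestion run for a historical range might look like the following; the flag names mirror the serve invocation shown later in this guide, and the paths and range here are assumptions for this particular deployment, not required values:

```shell
# Sketch: replay ledgers 1-1000 in catchup mode (no captive-core.toml needed,
# since this mode depends only on the history archives)
./aurora db reingest range 1 1000 \
  --db-url="postgresql://aurora:aurora@localhost/auroradb" \
  --network-passphrase="HC MainNet" \
  --history-archive-urls="https://hc-netbucket.s3.us-west-1.amazonaws.com/Node1/" \
  --hcnet-core-binary-path="/home/ubuntu/hcnet-core/src/hcnet-core"
```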

When reading recently closed ledgers, Core is started with a normal run command. This mode also requires an additional 3GiB of RAM for in-memory ledger entries. In this case, a configuration file is required in order to configure a quorum set so that it can connect to the Hcnet network.

Configuring Captive Core

  • Create the file captive-core.toml
  • Command: sudo vi captive-core.toml
  • Path: /home/ubuntu/captive-core.toml

File contents:

    NAME="Node1"
    ADDRESS=""
    HISTORY="curl -sf https://hc-netbucket.s3.us-west-1.amazonaws.com/Node1/{0} -o {1}"

    NAME="Node2"
    ADDRESS=""
    HISTORY="curl -sf https://hc-netbucket.s3.us-west-1.amazonaws.com/Node2/{0} -o {1}"

    NAME="Node3"
    ADDRESS=""
    HISTORY="curl -sf https://hc-netbucket.s3.us-west-1.amazonaws.com/Node3/{0} -o {1}"

    • NAME – must match the node's name
    • HOME_DOMAIN – enter your own instance IP
    • PUBLIC_KEY – enter the node key you generated above, for every node (Node1 key in the 1st validator, Node2 key in the 2nd, and Node3 key in the 3rd)
    • ADDRESS – peer address of each node, including the peer port
    • HISTORY – history (get) command for each node
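Taken together, the field notes above suggest a per-validator layout along the lines of the sketch below, which follows the standard Core TOML format of [[HOME_DOMAINS]] and [[VALIDATORS]] tables. Every HOME_DOMAIN, PUBLIC_KEY and ADDRESS value here is a placeholder to be replaced with your own:

```toml
# Sketch only: substitute your own domain, node keys and peer addresses.
[[HOME_DOMAINS]]
HOME_DOMAIN="your-domain.example"
QUALITY="HIGH"

[[VALIDATORS]]
NAME="Node1"
HOME_DOMAIN="your-domain.example"
PUBLIC_KEY="G...NODE1KEY"           # Node1 key generated earlier
ADDRESS="node1.example:11625"       # peer address with peer port
HISTORY="curl -sf https://hc-netbucket.s3.us-west-1.amazonaws.com/Node1/{0} -o {1}"
```

Repeat the [[VALIDATORS]] table for Node2 and Node3 with their own keys, addresses and history URLs.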

Running Aurora

Once your Aurora database and Captive Core configuration are set up properly, you're ready to run Aurora. Run hcnet-aurora with the appropriate parameters set (or hcnet-aurora-cmd serve if you installed via the package manager, which automatically imports your configuration from /etc/default/hcnet-aurora). This starts the HTTP server and begins logging to standard out. When run, you should see output similar to:

Example:

    INFO[...] Starting aurora on :8000 pid=29013

Run this command from the folder path home/go/bin:

    sudo nohup ./aurora --db-url="postgresql://aurora:aurora@localhost/auroradb" \
      --hcnet-core-url="http://localhost:11626" \
      --network-passphrase="HC MainNet" \
      --history-archive-urls="https://hc-netbucket.s3.us-west-1.amazonaws.com/Node1/" \
      --hcnet-core-binary-path="/home/ubuntu/hcnet-core/src/hcnet-core" \
      --captive-core-config-path="/home/ubuntu/captive-core.toml" \
      --ingest="true" &
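To confirm the server came up, you can query the root endpoint on the port shown in the example log line (8000 here) and follow the background process's log; the exact response body depends on your Aurora build:

```shell
# Check that Aurora is listening on its HTTP port
curl -s http://localhost:8000/

# Follow the output of the nohup-backgrounded process
tail -f nohup.out
```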

Installing Remote Captive Core

If you want to run Captive Core instances for transaction ingestion separately from other Aurora instances, we support that. This architecture allows flexibility in scaling and redundancy. For example, you may want each of your ingesting Aurora instances to have a dedicated Captive Core while your request-serving instances share a single Remote Captive Core for transaction submission. Or perhaps you want a dedicated Remote Captive Core living on more powerful hardware catered towards ingestion.
