
Hardware

A KYNGIN hardware node is a nondescript computer comprising, among other components, CPU, memory, network, and drives. Each node is custom designed and assembled at our facility. All nodes are dedicated, performant, and custom designed and tuned for each client's district->facility pair. Availability is further maximized by grouping resource-similar servers on distinctly designed nodes tailored to the server types they host.

By keeping similar servers in node-specific groups, we drastically reduce attack surfaces and increase performance across the fleet. This design maximizes resource availability, security, performance, and value.

Nodes built for KYNGIN / Project Mercury

For Mercury nodes, each node houses a collection of hardware-virtualized operating systems packaged in completely unique, resource-isolated environments. Unlike the other facilities, these nodes do not permit compute bursting, but they do present a slightly reduced surface for noisy-neighbor effects.

Nodes built for KYNGIN / Other Facilities

For all non-Mercury nodes, each node houses a collection of software-virtualized (hypervisor-free) operating systems packaged in service-similar jailed environments.

Jails increase security and maximize responsiveness via data-locality and proximity techniques. These techniques allow quick and safe data acquisition within resource boundaries while guaranteeing resource isolation. This design allows for large, burstable throughput where necessary, while simple management interfaces control large quantities of servers with little overhead.
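As a loose illustration of this kind of jail-based grouping (a sketch, not KYNGIN's actual tooling), the Python example below declares a handful of service-similar jails with fixed resource boundaries and renders FreeBSD-style rctl(8) rules for the whole group in one pass; the jail names and limits are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class JailSpec:
    """One service-similar jail with a fixed resource boundary."""
    name: str          # hypothetical jail name
    memory_mb: int     # hard memory ceiling for the jail
    cpu_pct: int       # share of node CPU the jail may consume

# Hypothetical group of web-serving jails hosted on one node type.
WEB_NODE_JAILS = [
    JailSpec("web01", memory_mb=4096, cpu_pct=25),
    JailSpec("web02", memory_mb=4096, cpu_pct=25),
    JailSpec("web03", memory_mb=4096, cpu_pct=25),
]

def rctl_rules(spec: JailSpec) -> list[str]:
    """Render per-jail resource rules in rctl(8) syntax (assumes FreeBSD rctl)."""
    return [
        f"jail:{spec.name}:memoryuse:deny={spec.memory_mb}M",
        f"jail:{spec.name}:pcpu:deny={spec.cpu_pct}",
    ]

if __name__ == "__main__":
    # One loop manages the whole service-similar group -- the "little overhead" part.
    for spec in WEB_NODE_JAILS:
        for rule in rctl_rules(spec):
            print(rule)   # in practice, rules like these would be fed to `rctl -a`
```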

Resource Isolation

Resource isolation permits fair resource distribution and security segregation across all cells. Noisy-neighbor effects are eliminated by disallowing over-subscription at every layer of the hosting stack. This segregated design also allows servers to be migrated across nodes on a simple schedule whenever fair service levels are exceeded.
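A minimal sketch of the non-over-subscription and migration idea, using hypothetical capacity numbers: a server is placed on a node only if the node's total reservation stays within its physical capacity, and a migration is flagged once sustained usage exceeds the fair share that was reserved.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu_capacity: float                                   # cores physically present
    reservations: dict[str, float] = field(default_factory=dict)

    def reserved(self) -> float:
        return sum(self.reservations.values())

    def place(self, server: str, cpu_cores: float) -> bool:
        """Place a server only if the node stays within its physical capacity."""
        if self.reserved() + cpu_cores > self.cpu_capacity:
            return False                                   # no over-subscription, ever
        self.reservations[server] = cpu_cores
        return True

def needs_migration(reserved_cores: float, sustained_usage: float) -> bool:
    """Flag a server for scheduled migration once it exceeds its fair share."""
    return sustained_usage > reserved_cores

# Usage with made-up numbers:
node = Node("node-a", cpu_capacity=32)
assert node.place("db01", 16)
assert node.place("db02", 16)
assert not node.place("db03", 4)            # would over-subscribe, so it is refused
print(needs_migration(reserved_cores=16, sustained_usage=18.5))  # True -> schedule a move
```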

The KYNGIN® software suite handles virtual machine orchestration and management through interconnected systems daemons. To keep all nodes synchronized and performant, systems tuning starts at the lowest levels and continues upward through the entire hosting stack, reducing latency where throughput isn't necessary.
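As a hedged sketch of what bottom-up tuning can look like (the knobs and values here are placeholders, not KYNGIN's actual settings), kernel-level parameters are applied first, and higher layers are adjusted afterward:

```python
import subprocess

# Placeholder kernel-level settings; a real deployment would choose values
# per node type. These are applied first, before any per-service tuning.
KERNEL_TUNING = {
    "kern.ipc.somaxconn": "1024",        # listen-queue depth (FreeBSD sysctl)
    "net.inet.tcp.delayed_ack": "0",     # trade a little throughput for lower latency
}

def apply_sysctl(settings: dict[str, str]) -> None:
    """Apply each setting via sysctl(8); assumes root and a BSD-style sysctl."""
    for key, value in settings.items():
        subprocess.run(["sysctl", f"{key}={value}"], check=True)

if __name__ == "__main__":
    apply_sysctl(KERNEL_TUNING)
    # ...per-jail and per-service tuning would follow, layer by layer.
```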

Additional performance is realized by tuning for throughput where latency isn't a concern, maximizing parallelization with simple and safe locking mechanisms, and applying burstable resource allocation where idle compute is available.
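The burst-from-idle idea can be sketched in a few lines, assuming a hypothetical accounting structure: idle headroom on the node is tracked, a jail may borrow from it only while that headroom exists, and a simple lock keeps the shared accounting safe under parallel requests.

```python
import threading

class BurstPool:
    """Grants temporary extra CPU only out of measured idle headroom."""

    def __init__(self, idle_cores: float):
        self._idle = idle_cores            # idle compute currently available
        self._lock = threading.Lock()      # simple, safe locking around shared state

    def borrow(self, cores: float) -> bool:
        """Grant a burst if, and only if, idle capacity covers it."""
        with self._lock:
            if cores > self._idle:
                return False
            self._idle -= cores
            return True

    def release(self, cores: float) -> None:
        """Return borrowed cores once the burst is over."""
        with self._lock:
            self._idle += cores

# Hypothetical usage: 6 cores are idle on the node right now.
pool = BurstPool(idle_cores=6.0)
print(pool.borrow(4.0))   # True  -- burst granted from idle compute
print(pool.borrow(4.0))   # False -- would exceed what is actually idle
pool.release(4.0)
```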

Depending on facility type, we further tune process prioritization for each security container, service application, and engine within the node itself.
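One way to picture per-service prioritization (a sketch with made-up service classes, not KYNGIN's real policy) is a fixed map from service class to scheduler niceness, applied to each process with the standard os.setpriority call:

```python
import os

# Hypothetical service classes mapped to Unix niceness (lower = higher priority).
PRIORITY_BY_CLASS = {
    "database-engine": -5,
    "application": 0,
    "batch-maintenance": 10,
}

def prioritize(pid: int, service_class: str) -> None:
    """Apply the class's niceness to one process (privileges needed to lower it)."""
    niceness = PRIORITY_BY_CLASS[service_class]
    os.setpriority(os.PRIO_PROCESS, pid, niceness)

# Example: drop the current process to batch priority.
prioritize(os.getpid(), "batch-maintenance")
print(os.getpriority(os.PRIO_PROCESS, os.getpid()))  # 10
```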