Table of Contents
- The Hyperconverged Stack MSPs Are Building in 2026
- Hypervisor Nodes: Proxmox VE 9 on Current-Generation Silicon
- The 25 Gbps Private Ceph Fabric
- Firewall Appliances: Customer-Managed OPNsense or Managed FortiGate as a Service
- NAS and ELK Server: Bulk HDD, Hot NVMe, Elasticsearch Heap
- Network Control: Self-Service VLANs, Routed Subnets, BGP
- Compliance: SOC 2 Type II, HIPAA, and PCI DSS 4.0 Built In
- Peering That Makes Both VPN Workarounds Disappear
- Questions Every Hyperconverged RFQ Asks
- Hyperconverged Does Not Have to Be Hard
Growing MSPs, cybersecurity firms, and compliance-driven service providers regularly approach us with detailed Requests for Quotation (RFQs) and ambitious infrastructure requirements. They are not looking for basic hosting. They need modern hypervisor clusters, high-speed private networking for Ceph, dedicated security appliances, dense storage, flexible networking, audited data center environments, and aggressive migration timelines.
A recent RFQ captured the reality of modern infrastructure design projects: demanding technical specifications, clear compliance obligations, and a deployment window with little margin for delay. On paper, that is seven servers, three operating systems, fourteen network interfaces, ten VLANs, and a data center audit trail. In practice, it is a normal Tuesday for Atlantic.Net.
This post walks through the hyperconverged stack MSPs are building in 2026, the buying questions that come up in every RFQ, and where Atlantic.Net’s dedicated server platform and managed services fit into a customized solution.
The Hyperconverged Stack MSPs Are Building in 2026
Hyperconverged infrastructure (HCI) collapses compute, storage, and networking onto a single cluster of commodity servers. The software layer handles the job that dedicated SAN arrays and top-of-rack switches used to do. Proxmox runs the virtual machines. Ceph, an open-source distributed storage platform, pools drives across every node, running one Object Storage Daemon (OSD) per drive, and replicates each write three times. A next-generation firewall guards the perimeter.
Proxmox VE 9.1 with hyperconverged Ceph has quietly become the platform MSPs choose when they want predictable cost, modern hardware, and open licensing. Proxmox VE 9 ships with Ceph v19.2 “Squid” by default, LZ4 compression enabled, and improved BlueStore handling for snapshot-heavy workloads. Pair it with a next-generation firewall for the edge and a storage-dense NAS running Proxmox Backup Server and an ELK stack (Elasticsearch, Logstash, and Kibana, the open-source log analytics trio), and you cover compute, storage, security, and observability in one bill of materials.
The architectural shape is consistent across almost every RFQ we see:
- Four or more hypervisor nodes running Proxmox VE with hyperconverged Ceph OSDs
- A dedicated 25 Gbps Layer 2 (L2) private fabric carrying Ceph cluster and Ceph client (public) traffic
- Two perimeter firewall appliances as the only internet-facing machines in the environment
- One storage-dense NAS, backup, and observability server with both bulk HDD capacity and NVMe hot tiers
- A routed /26 public subnet assigned exclusively to the firewalls, which NAT every other server
Hypervisor Nodes: Proxmox VE 9 on Current-Generation Silicon
A hypervisor node in a Ceph cluster has three jobs: run the virtual machines, serve Ceph storage to every other node in the cluster, and survive a peer going down without hesitation. The spec list MSPs ask for maps cleanly to those jobs.
- CPU: Current-generation AMD EPYC 9004 or 9005 (Genoa, Turin) or Intel Xeon 4th, 5th, or 6th Gen Scalable (Sapphire Rapids, Emerald Rapids, Granite Rapids).
- RAM: 512 GB DDR5 ECC per node (2 TB across the four-node cluster) comfortably runs virtual desktop infrastructure (VDI) workloads under HIPAA scope alongside Ceph OSD services and monitor daemons.
- Boot and OS drives: 2× 480 GB or larger SSD or NVMe in a hardware RAID-1 or a ZFS mirror (ZFS is the copy-on-write filesystem with checksums and snapshots built in), kept separate from the OSD drives. Proxmox, monitors, and logs live here.
- Ceph OSDs: 4× 3.84 TB enterprise NVMe drives per node (NVMe is the PCIe-attached flash standard that replaced SATA SSDs for latency-sensitive work), passed through as individual block devices. No hardware RAID on the OSD drives. The host bus adapter (HBA) runs in IT mode, also known as JBOD (“just a bunch of disks”), so Ceph owns the disk layer end to end.
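For quick sanity math on what that layout yields, here is a back-of-envelope sketch in plain Python. The node and drive counts come from the spec above; the 80% fill target is a common Ceph operational guideline, not an RFQ figure:

```python
# Back-of-envelope Ceph capacity for the hypervisor spec above.
nodes = 4
osds_per_node = 4
osd_size_tb = 3.84          # enterprise NVMe per the spec list
replication = 3             # three-way replication, as described earlier
fill_target = 0.80          # common guideline: keep OSDs below ~80% full

raw_tb = nodes * osds_per_node * osd_size_tb
usable_tb = raw_tb / replication
safe_tb = usable_tb * fill_target

print(f"Raw capacity:            {raw_tb:.2f} TB")    # 61.44 TB
print(f"Usable after 3x replica: {usable_tb:.2f} TB") # 20.48 TB
print(f"Practical working set:   {safe_tb:.2f} TB")   # ~16.4 TB
```

Roughly 61 TB of raw NVMe becomes about 16 TB of practical working set, which is why OSD counts in RFQs always look bigger than the workload they serve.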
Checking the hardware (CPU, NIC, HBA, IPMI) against the Proxmox VE 9.1.5 compatibility matrix is a pre-provisioning conversation between customer and provider, not a post-delivery surprise.
The 25 Gbps Private Ceph Fabric
Ceph performance is a network problem before it is a disk problem. With NVMe OSDs and three-way replication, every write has to hit two additional nodes across the Ceph fabric before it is acknowledged, and a single drive failure triggers a cluster-wide rebuild that shifts gigabytes between nodes in seconds.
A 10 Gbps link collapses under that recovery storm and chokes even ordinary writes. 25 Gbps absorbs it. The Proxmox project documents 10 Gbps as the floor and 25 Gbps as the working minimum for production Ceph clusters. 100 Gbps on the backend with data center NVMe is the gold standard.
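A rough worked example makes the gap concrete. Assume a single failed 3.84 TB OSD whose contents must be re-replicated across the fabric, and ignore protocol overhead and the client I/O competing for the same links:

```python
# Rough time to re-replicate one failed 3.84 TB OSD, ignoring overhead
# and the client traffic sharing the same links.
osd_tb = 3.84
data_bytes = osd_tb * 1e12

for gbps in (10, 25, 100):
    bytes_per_sec = gbps * 1e9 / 8
    minutes = data_bytes / bytes_per_sec / 60
    print(f"{gbps:>3} Gbps: ~{minutes:.0f} minutes")
# 10 Gbps: ~51 minutes, 25 Gbps: ~20 minutes, 100 Gbps: ~5 minutes
```

Real recovery rarely saturates the link, but the ratio holds: the failure window shrinks in direct proportion to fabric speed, and every minute of it is a minute of degraded redundancy.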
A hyperconverged build scoped for this RFQ looks like:
- 2× 25 Gbps dedicated private interfaces for Ceph cluster and Ceph public (client) networks, isolated on their own virtual LAN (VLAN)
- 2× 10 Gbps interfaces bonded with LACP, two NICs joined into one logical link for throughput and automatic failover, for VM traffic and Proxmox management on a separate private VLAN
- Zero internet exposure on the hypervisors. All egress routes through the perimeter firewalls.
The network separation keeps Ceph recovery traffic from starving VM I/O and keeps the attack surface on the hypervisors at zero.
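As a minimal sketch of that separation, Python's stdlib ipaddress module can assert that no traffic class shares address space with another. The names and RFC 1918 subnets below are illustrative placeholders, not values from any RFQ:

```python
import ipaddress

# Illustrative address plan for the four traffic classes described above.
vlans = {
    "ceph-cluster": ipaddress.ip_network("10.10.10.0/24"),  # OSD replication
    "ceph-public":  ipaddress.ip_network("10.10.20.0/24"),  # Ceph client I/O
    "vm-traffic":   ipaddress.ip_network("10.20.0.0/22"),   # guest networks
    "management":   ipaddress.ip_network("10.30.0.0/24"),   # Proxmox UI/API
}

# Sanity check: no traffic class overlaps another.
names = list(vlans)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not vlans[a].overlaps(vlans[b]), f"{a} overlaps {b}"

print("Address plan is non-overlapping:", ", ".join(names))
```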
Firewall Appliances: Customer-Managed OPNsense or Managed FortiGate as a Service
MSPs approach the perimeter firewall in two different ways, and both patterns appear in RFQs.
The first pattern is customer-managed OPNsense Business. OPNsense runs on FreeBSD, and FreeBSD is picky about network cards. Pick the wrong NIC, and a firewall deployment becomes a two-day driver-debugging session. Mellanox, Broadcom, and Realtek parts range from flaky to unusable under FreeBSD production load. Intel X710, X550, and I350 silicon has mature FreeBSD drivers and behaves. RFQs asking for OPNsense typically specify:
- Single-socket Xeon E3 or E5 class, 4 to 8 cores
- 64 GB DDR4 ECC
- 2× 1 TB or 2 TB SSD in RAID-1
- 2× 10 Gbps Intel NICs with LACP and VLAN support
- IPMI (Intelligent Platform Management Interface, effectively KVM-over-IP baked into the motherboard) with remote ISO mount, so the customer installs OPNsense Business from their own image
The second pattern is managed FortiGate Firewall as a Service. Atlantic.Net offers FortiGate FaaS through a partnership with Fortinet, which suits MSPs that would rather not operate the firewall layer themselves. FortiGate brings FortiOS with thirty integrated security and networking functions, among them firewall, VPN, NGFW, segmentation, distributed NGFW, DDoS defense, SSL inspection, secure SD-WAN, and ZTNA. FortiGuard AI-powered threat intelligence adds IPS, anti-malware, application control, web and DNS filtering, botnet command-and-control detection, data loss prevention, anti-spam, and inline malware prevention. The Fortinet use cases that map to this RFQ shape are Data Center Perimeter, Virtual Firewall, and Hyperscale.
Either way, the /26 routes to the firewall interfaces. The firewalls NAT every hypervisor, VM, and backup target behind them. No public IPs touch the cluster itself.
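The /26 arithmetic is easy to verify with Python's stdlib; the prefix below comes from the RFC 5737 documentation range and stands in for the customer's real allocation:

```python
import ipaddress

# A documentation-range /26 stands in for the customer's routed block.
block = ipaddress.ip_network("198.51.100.0/26")

print("Total addresses:", block.num_addresses)       # 64
print("Usable hosts:   ", block.num_addresses - 2)   # 62 (minus network/broadcast)
print("First usable:   ", block.network_address + 1)
print("Last usable:    ", block.broadcast_address - 1)
```

All 62 usable addresses terminate on the firewall pair; everything behind them rides NAT.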
NAS and ELK Server: Bulk HDD, Hot NVMe, Elasticsearch Heap
The NAS box is the workhorse in most MSP stacks. It holds NFS home directories for VDI users, SMB file shares for corporate workloads, the Proxmox Backup Server datastore, and the ELK hot, warm, and cold tiers. Customers asking for this role typically specify a single storage-dense chassis:
- Dual-socket Xeon with 20 or more cores. ELK indexing is CPU-bound, and Elasticsearch benefits from every core you give it.
- 384 GB DDR4 ECC for Elasticsearch heap and ZFS’s in-memory read cache, the ARC. How that memory splits, and what RAIDZ2 leaves usable, is sketched just after this list.
- 12× 8 TB or larger SATA or SAS HDD in ZFS RAIDZ2 or RAIDZ3. That is the bulk capacity layer. NFS, SMB, Proxmox Backup Server, and ELK cold data all live here.
- 4 to 5× 3.84 TB NVMe for ELK hot and warm indices, plus ZFS’s write-log (SLOG) and flash read-cache (L2ARC) accelerators.
- 4× 10 Gbps LACP for high-throughput NFS serving to the hypervisor cluster.
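Two sizing rules hide inside that list: Elastic's published guidance keeps the JVM heap under roughly 32 GB so compressed object pointers stay enabled, and RAIDZ2 gives two drives' worth of capacity to parity. A quick sketch in plain Python:

```python
ram_gb = 384

# Elasticsearch guidance: keep the JVM heap below ~32 GB so the JVM can
# use compressed object pointers; the rest of RAM goes to the OS page
# cache and the ZFS ARC.
es_heap_gb = min(ram_gb // 2, 31)
print(f"Elasticsearch heap: {es_heap_gb} GB, ARC/page cache: {ram_gb - es_heap_gb} GB")

# RAIDZ2 bulk tier: 12 drives, 2 of them consumed by parity.
drives, drive_tb, parity = 12, 8, 2
print(f"RAIDZ2 data capacity: ~{(drives - parity) * drive_tb} TB before ZFS overhead")
```

Actual usable space lands lower once ZFS metadata and allocation padding are counted, but the shape of the split holds: a 31 GB heap, roughly 350 GB for caching, and about 80 TB of bulk tier.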
A chassis with 12 or more 3.5-inch HDD bays and NVMe slots in the same box is standard enterprise hardware, and storage-dense dedicated server builds accommodate that form factor.
Network Control: Self-Service VLANs, Routed Subnets, BGP
MSPs do not want to open a NOC ticket every time a new client VLAN goes live, and the larger ones want to announce their own IP space via Border Gateway Protocol (BGP), the routing protocol the public internet runs on. Self-service VLAN creation, modification, and deletion is the expected baseline in 2026, and a BGP announcement should be a phone call rather than a project. Modern MSP stacks look for:
- Self-service VLANs via portal and API, with no ticket required. Starting with 10 or more VLANs at launch (Ceph cluster, management, WAN, DMZ, per-client segments) is normal.
- Routed /26 subnets delivered to specific firewall interfaces. Public IPs bind only to the firewall appliances, not to every server.
- BGP announcement of the customer IP space when the customer is ready.
- Network-level DDoS mitigation at the edge.
- Flat-rate bandwidth without 95th-percentile billing games. A 54 Mbps p95 and 15.5 TB monthly transfer profile fits cleanly.
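To put that profile in perspective, here is a sketch of what 15.5 TB per month implies as an average rate and how a p95 figure falls out of a month of 5-minute samples. The sample distribution is synthetic and purely illustrative:

```python
# What 15.5 TB/month looks like as an average rate.
tb_per_month = 15.5
seconds_per_month = 30 * 24 * 3600
avg_mbps = tb_per_month * 1e12 * 8 / seconds_per_month / 1e6
print(f"15.5 TB/month is an average of ~{avg_mbps:.0f} Mbps")  # ~48 Mbps

# p95 billing: sort a month of 5-minute samples (30 * 288 = 8640),
# discard the top 5%, and bill the highest remaining sample.
samples_mbps = sorted([40] * 8000 + [54] * 508 + [400] * 132)
p95 = samples_mbps[int(len(samples_mbps) * 0.95) - 1]
print(f"p95 of the synthetic month: {p95} Mbps")  # 54 Mbps
```

The flat-rate point is that the 132 burst samples above cost nothing; under p95 billing, any burst window longer than the discarded 5% (432 samples, about 36 hours) would set the entire bill.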
Compliance: SOC 2 Type II, HIPAA, and PCI DSS 4.0 Built In
MSPs serving healthcare and payment clients can’t treat compliance as a bolt-on. Atlantic.Net’s data centers are SOC 2 Type II and SOC 3 Type II audited, with controls for HIPAA (the US Health Insurance Portability and Accountability Act, which governs patient data), HITECH, and PCI DSS 4.0 (Payment Card Industry Data Security Standard, the rules for handling cardholder data) in place. The HIPAA Business Associate Agreement (BAA) is available for infrastructure that handles electronic Protected Health Information (ePHI). Infrastructure compliance is the customer’s floor, not the ceiling. Their application, access controls, and operational processes still matter, but they start from a data center that has already passed the audit.
For MSPs, the heavy lifting on the audit packet (data center SOC 2, BAA language, encryption at rest, access logging, 24/7 SOC monitoring) is already in place on day one.
Peering That Makes Both VPN Workarounds Disappear
Atlantic.Net peers aggressively in Ashburn, Dallas, and New York, with low-latency reach to both US coasts. A well-peered location eliminates two common workarounds: the VPN relay bolted on to patch site-to-site connectivity, and the third-party remote-access VPN bolted on to patch end-user connectivity. Both tend to disappear on day one of the migration.
Questions Every Hyperconverged RFQ Asks
The same ten questions surface in every hyperconverged RFQ. Short, practical answers:
- Can boot and OS drives be separated from Ceph OSD drives, with OSDs passed through as JBOD? They should be. OSD HBAs want to ship in IT mode, with no hardware RAID on OSDs, so Ceph sees every drive natively and can act the moment one starts to fail. (A quick pre-flight sketch follows this list.)
- What NICs are available for the 25 Gbps private Ceph network? Mellanox ConnectX-5, Mellanox ConnectX-6, and Intel E810 are the common options.
- Is self-service VLAN creation available without NOC tickets? Portal- and API-driven self-service is the expected baseline. Ten or more VLANs at launch is normal for a multi-tenant MSP stack.
- What is the hardware replacement SLA? 4-hour replacement for failed drives and nodes, including parts and labor, is the benchmark for dedicated server providers at this tier.
- Is there a discount for a 12-month commitment? Longer billing terms typically carry a standard discount against month-to-month pricing.
- What is the lead time for seven servers? Weeks, not months. Firm dates belong in the quote.
- Can the NAS chassis take 12 or more 3.5-inch HDD bays plus NVMe slots in the same box? Storage-dense chassis that take both form factors are standard enterprise hardware.
- Does IPMI support remote ISO mount for customer-installed operating systems? No OS should be preloaded when the customer plans to install Proxmox VE 9.1.5 and OPNsense Business themselves. Customers opting for managed FortiGate FaaS avoid the install step at the firewall layer entirely.
- Can a /26 subnet be routed to specific firewall interfaces? Yes. Public IPs should bind only to the firewall appliances, not to every server.
- Is the BGP announcement of the customer IP space supported? A BGP announcement needs a Letter of Authorization for the prefixes being originated and a peering relationship with the upstream, and should be part of the scoping call.
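On the first question in that list, a rough pre-flight check can confirm that a node exposes individual NVMe namespaces rather than one collapsed RAID volume. This is a Linux-only sketch that reads sysfs; the expected drive count comes from the hypervisor spec earlier in the post:

```python
import os

# Pass-through/JBOD presents one block device per physical drive
# (nvme0n1, nvme1n1, ...); a hardware RAID volume would collapse them
# into a single logical device instead.
EXPECTED_OSD_DRIVES = 4  # per the hypervisor spec above

namespaces = sorted(d for d in os.listdir("/sys/block") if d.startswith("nvme"))
print("NVMe block devices:", ", ".join(namespaces) or "none found")
if len(namespaces) < EXPECTED_OSD_DRIVES:
    print("Fewer devices than expected OSDs: check the HBA/RAID mode.")
```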
Hyperconverged Does Not Have to Be Hard
Hyperconverged Proxmox and Ceph are technically dense, operationally unforgiving on the network side, and expensive to get wrong. The thing MSPs actually want from a hosting provider is the removal of the parts that should not have been their problem to begin with: hardware compatibility, data center compliance, peering quality, VLAN control, and, for some customers, firewall operations.
Atlantic.Net runs HIPAA-compliant, SOC 2 Type II audited dedicated servers out of well-peered US data centers, with BAAs available, 24/7 US-based support, and FortiGate Firewall as a Service available for customers that prefer a managed edge. Every hyperconverged build is scoped and quoted as a customized solution rather than a fixed catalog SKU, which is what a seven-server RFQ needs by the time it hits the rack.
If your next cluster build looks anything like the RFQ above, we would rather have the conversation than the excuse. Request a dedicated server quote or reach our US-based team 24/7/365.
* This post is for informational purposes only and does not constitute professional, legal, financial, or technical advice. Each situation is unique and may require guidance from a qualified professional.
Readers should conduct their own due diligence before making any decisions.