National infrastructure to accelerate high-performance Artificial Intelligence projects, with data sovereignty and specialized support.
Skynova’s Cloud GPU for AI was developed for companies that need to take AI, 3D modeling, and high-performance workloads to the next level, with dedicated NVIDIA GPUs, low latency, and full compliance in Brazil. No dependence on foreign providers. No risk to sovereignty.

What is Cloud GPU for AI?
Cloud GPU for AI is a specialized cloud computing platform from Skynova, designed to accelerate workloads in artificial intelligence, machine learning, graphics rendering, and complex simulations.
Unlike CPU-only environments, Cloud GPU utilizes NVIDIA graphics processing units in passthrough mode, offering massive parallel computing power capable of reducing AI model training and execution time from days to hours.
With 100% Brazilian infrastructure and data sovereignty, Cloud GPU for AI guarantees extreme performance, LGPD compliance, and specialized human support, all within the national territory.
Why adopt Skynova’s Cloud GPU for AI?
Cloud GPU for AI combines high performance with corporate governance, delivering an infrastructure capable of supporting AI projects, graphics workloads, and critical simulations with predictability, security, and specialized support.
Where Cloud GPU for AI makes a difference
More and more companies are incorporating artificial intelligence and 3D visualization into their processes. Cloud GPU for AI caters to everyone from AI startups to corporations that need secure, local computing power.
Management and Autonomy Panel
As with all Skynova Cloud solutions, GPU management is done directly through the control panel.
The client can contract the GPU as an additional component of the VM, maintaining full visibility over consumption, performance, and availability.
This integration simplifies operation, allowing the IT team to configure, monitor, and adjust GPU resources with the same level of control already available for CPU, memory, and storage.
Security and Compliance
Performance doesn’t have to come at the expense of security. Cloud GPU for AI operates on the robust foundation of Skynova infrastructure, with protection and compliance protocols equivalent to those of the largest global providers.
Technical Features of the Initial Version
In this first phase, Cloud GPU for AI offers two NVIDIA GPU models in passthrough mode, integrated into Skynova Cloud VMs. These GPUs are ideal for workloads that require machine learning acceleration, rendering, and parallel processing.
| Model | GPU Memory | CUDA Cores | FP32 Performance | TDP |
|---|---|---|---|---|
| NVIDIA L4 | 24 GB GDDR6 | 7,424 | 30.3 TFLOPS | 72 W |
| NVIDIA L40S | 48 GB GDDR6 | 18,176 | 91.6 TFLOPS | 350 W |
Mode: Passthrough in Skynova Cloud VM
Requirement: Active contracted Cloud Server
Compatibility: All Skynova operating system offerings
Hybrid and Flexible Models
Cloud GPU for AI was designed to evolve.
In this first version, the contracting model is GPU on demand, added to Skynova Cloud servers.
In the next phases, the offering will be expanded to include clustering, auto-scaling, and load balancing across multiple GPUs. This ensures flexibility for both test and development environments as well as enterprise production infrastructures.
Customer Service and Specialized Support
The performance is technical, but the service is human.
Customers benefit from 24/7 nationwide support, assisted onboarding, and technical follow-up from Skynova’s infrastructure engineers. Our team works closely with clients to fine-tune performance, configure AI workloads, and optimize operational costs, always with a consultative approach.
Want to test the power of GPUs in the cloud and accelerate your AI projects?
Speak with a Skynova consultant and receive a personalized technical assessment. We can scale your AI environment in the cloud and demonstrate the real performance gains.
Additional Technical Content
FAQ · Cloud GPU for AI
A GPU (Graphics Processing Unit) is a processing unit designed to execute calculations in parallel—that is, thousands of simultaneous operations.
While the CPU (Central Processing Unit) is optimized for sequential and logical tasks, the GPU is ideal for processing large volumes of data in matrices, making it essential for training AI models, rendering images, and performing complex simulations.
In machine learning and deep learning applications, this means a drastic reduction in processing time and greater energy efficiency compared to traditional processors.
| Feature | CPU | GPU |
|---|---|---|
| Processing type | Sequential | Parallel |
| Cores | Few (usually 4–32) | Thousands |
| Typical applications | Operating systems, databases, administrative tasks | AI, rendering, simulations, scientific calculations |
| Speed in AI | Good for logic and control | Excellent for machine learning and neural networks |
In summary: the CPU thinks, the GPU learns. They complement each other — the CPU coordinates, the GPU accelerates.
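To make the difference concrete, here is a minimal sketch that times the same matrix multiplication on the CPU and on a CUDA GPU using PyTorch. It assumes a CUDA-capable GPU and a CUDA build of PyTorch are available; it is illustrative only and not tied to any Skynova API.

```python
# Minimal sketch: compare the same matrix multiplication on CPU and GPU.
# Assumes a CUDA-capable GPU and a CUDA build of PyTorch (not Skynova-specific).
import time
import torch

def time_matmul(device: str, size: int = 4096) -> float:
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # start timing from an idle GPU
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
    return time.perf_counter() - start

print(f"CPU matmul: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU matmul: {time_matmul('cuda'):.3f} s")
```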
It is a dedicated GPU service hosted in Skynova data centers in Brazil, which can be attached to existing virtual servers (VMs) in the Skynova Cloud.
The GPU is delivered in passthrough mode, which means the virtual machine accesses the physical GPU directly, with performance equivalent to dedicated hardware.
This allows your company to run AI, rendering, and HPC workloads without needing to purchase or maintain its own servers.
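A quick way to confirm that the VM sees the dedicated GPU directly is to query the NVIDIA driver from inside the guest. The sketch below uses the nvidia-ml-py (pynvml) bindings and assumes the NVIDIA driver and that package are installed in the VM; it is an illustration, not part of the Skynova panel.

```python
# Sketch: list the GPUs visible inside the VM via the NVIDIA driver (NVML).
# Assumes the NVIDIA driver and the nvidia-ml-py package are installed in the guest.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {name}, {mem.total / 1024**3:.0f} GiB total memory")
finally:
    pynvml.nvmlShutdown()
```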
In this first phase, NVIDIA L4 (24 GB) and NVIDIA L40S (48 GB) GPUs, both from the Data Center line, are available.
These cards offer exceptional performance for generative AI, inference, 3D rendering, and engineering simulation applications.
Customers can choose the GPU based on their workload and budget, adding the feature as an add-on to their VMs.
No. The GPU is an additional component of the Skynova Cloud virtual machines.
To use it, you need to have an active VM in the Skynova Cloud environment, as the feature is configured and monitored within the same control panel.
The process follows Skynova’s standard of simplicity:
- Purchase the GPU component via the panel.
- The technical team performs the activation in the Cloud environment.
- The customer can now view and monitor GPU usage directly from the control panel.
No additional tools are needed, and management can be done with the same autonomy already existing for CPU, memory, and storage.
- Training and inference of AI and machine learning models.
- 3D rendering, VFX, and advanced computer graphics.
- Simulations in engineering, architecture, and computational physics.
- Big data analytics and massively parallel processing.
- CAD/CAM environments that require high graphics performance.
Companies in the technology, engineering, architecture, design, entertainment, education, and scientific research sectors are the main beneficiaries.
But any organization that works with large volumes of data or complex graphical visualization can gain efficiency with dedicated GPUs.
Gains vary depending on the type of workload, but on average:
- AI training: up to 20× faster than a conventional CPU.
- 3D rendering: processing time reduced from hours to minutes.
- Scientific simulations: 5× to 15× acceleration in parallel calculations.
These gains translate into shorter delivery times, lower operating costs, and greater agility for innovation.
The model is based on charging per GPU unit, in addition to the resources already contracted from Skynova Cloud (CPU, memory, storage, etc.).
The price is proportional to the GPU model and the contracted usage time.
Customers with high processing volumes can request customized terms.
In the initial phase, only 1 GPU per VM is available.
Skynova is already developing phase 2, which will include multi-GPU, clustering, and auto-scaling for distributed AI and HPC workloads.
Skynova maintains its entire environment in Tier 3 data centers located in Brazil, ensuring compliance with the LGPD (Brazilian General Data Protection Law) and industrial confidentiality.
Furthermore:
- End-to-end encryption.
- Continuous monitoring 24/7.
- Multi-layered access control and authentication.
- Automatic backups in accordance with Skynova Cloud policy.
Yes. The solution is fully compatible with the Skynova ecosystem and with external integrations, such as:
- Microsoft 365, Google Workspace and Veeam.
- Linux and Windows Server environments.
- AI tools and frameworks such as TensorFlow, PyTorch, Keras, ONNX, and CUDA.
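As a quick sanity check after attaching the GPU, the snippet below confirms that the common frameworks can see the device. It assumes CUDA-enabled builds of PyTorch and TensorFlow are installed in the VM; the package choices are assumptions, and nothing here is Skynova-specific.

```python
# Sketch: confirm that common AI frameworks see the attached GPU.
# Assumes CUDA-enabled builds of PyTorch and TensorFlow are installed in the VM.
import torch
import tensorflow as tf

print("PyTorch sees CUDA:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("PyTorch device:", torch.cuda.get_device_name(0))

gpus = tf.config.list_physical_devices("GPU")
print("TensorFlow GPUs:", [gpu.name for gpu in gpus])
```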
All customers have access to:
- 24/7 technical support.
- Assisted onboarding for initial setup.
- Strategic technical support with Skynova engineers.
Our distinguishing feature is personal, close, and nationwide support, ideal for companies that need reliability in production.
The initial version is the first step in an ongoing journey.
In phase 2, Cloud GPU for AI will receive:
- GPU clustering for distributed workloads.
- Intelligent auto-scaling for AI and HPC.
- Advanced dashboard for metrics and consumption.
These improvements will allow companies to operate complete AI infrastructures in Brazil, with full sovereignty and performance.
Didn't find your question?
Our team of experts is ready to clarify all your questions about Skynova’s Cloud GPU for AI.
FAQ · Skynova Cloud
Get your questions answered about Cloud Computing and its functionalities.
Cloud computing is the on-demand delivery of IT infrastructure via the internet. Skynova offers this model with its own infrastructure, national operation, and focus on security, autonomy, and corporate performance.
• Public: resources shared among various clients, managed by the provider.
• Private: infrastructure dedicated to a single company, with greater control and security.
• Hybrid: integration between local and cloud environments, with unified management.
Yes. Skynova Cloud is a public cloud operated exclusively by Skynova, with dedicated resources per client and total isolation between environments.
Two main models:
• Elastic Cloud: on-demand use with full configuration autonomy via panel.
• Reserved Cloud: pre-defined capacity with centralized control by project.
Both can be combined and operate complementarily.
The quota represents a minimum monthly commitment of infrastructure usage, allowing for consumption predictability. Resources within the quota follow their own provisioning and management policy.
Consumption is calculated based on the hours that resources were active. Each resource (VM, disk, traffic, IP) has a minimum accounting granularity of 1 hour.
Components orchestrated via CloudStack, such as VMs, disks, snapshots, traffic, and public IPs, are considered for calculation. Additional services, such as backup, licenses, and management, follow separate policies.
Yes. The elastic model allows activating and deactivating resources in real-time. The system accounts for usage time and applies the hourly consumption policy.
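As a rough estimate of how the 1-hour granularity affects a bill, the sketch below rounds each resource’s active time up to whole hours and multiplies by an hourly rate. The rates and resource names are hypothetical placeholders, not Skynova prices.

```python
# Sketch: estimate monthly consumption under a 1-hour minimum billing granularity.
# Hourly rates and resource names below are hypothetical placeholders, not Skynova prices.
import math

HOURLY_RATES = {"vm": 1.20, "gpu_l4": 4.50, "disk_100gb": 0.05}  # placeholder BRL/hour

def billed_hours(active_seconds: float) -> int:
    # Any started hour is accounted as a full hour (1-hour granularity).
    return math.ceil(active_seconds / 3600)

usage_seconds = {"vm": 650_000, "gpu_l4": 90_000, "disk_100gb": 2_592_000}  # example month
total = sum(billed_hours(sec) * HOURLY_RATES[res] for res, sec in usage_seconds.items())
print(f"Estimated monthly consumption: R$ {total:.2f}")
```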
• Environment isolation by dedicated VLAN.
• Virtual firewall with customized rules.
• Support for site-to-site and client-to-site VPNs.
• Native Anti-DDoS protection in all zones.
• Backup, snapshots, and 2FA authentication in the panel.
Yes. All data is stored in datacenters located in Brazil, with Tier 3 certification and full adherence to Brazilian legislation, including LGPD.
Yes. The infrastructure and data management processes are aligned with LGPD, ensuring privacy, traceability, and legal security.
• Creation and removal of VMs.
• Management of disks, snapshots, and networks.
• IP assignment, firewall rules, and VPN.
• Consumption visualization by project.
• Export of reports and automated alerts.
Yes. A documented RESTful API is available that allows integrating automations, monitoring resources, orchestrating environments, and managing components programmatically.
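For example, a monitoring script could poll the API for the virtual machines in a project. The base URL, endpoint path, and authentication header below are hypothetical placeholders; the real routes and authentication scheme are defined in Skynova’s API documentation.

```python
# Sketch: query the cloud API for the VMs of a project.
# The base URL, endpoint path, and auth header are hypothetical placeholders;
# consult Skynova's API documentation for the real routes and authentication.
import os
import requests

BASE_URL = "https://api.example-skynova.cloud/v1"   # placeholder
TOKEN = os.environ["SKYNOVA_API_TOKEN"]             # placeholder variable name

resp = requests.get(
    f"{BASE_URL}/projects/my-project/vms",          # hypothetical endpoint
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for vm in resp.json():
    print(vm)
```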
Yes. The client can create multiple projects within the panel, each with its own consumption and billing cycle, facilitating the management of segregated environments.
• 24/7 technical support with in-house team.
• National service, specialized in critical infrastructure.
• Assisted onboarding and post-implementation follow-up.
• Proactive monitoring and preventive action on incidents.
Yes. Skynova’s migration team acts throughout the entire process: diagnosis, planning, execution, testing, and support after go-live, ensuring continuity and security.
Yes. Managed Cloud is an additional offering that includes continuous administration of operating systems, networks, security, and maintenance, with periodic reports and corrective actions.
Skynova uses the KVM and Xen hypervisors, with automatic load balancing between hosts and transparent migration of virtual machines.
Skynova operates five Availability Zones (AZs) in Brazil: SP1, SP2, SP3, VIN1, and NE1. All data centers are Tier 3, with redundant links of up to 100 Gbps, power protection, and advanced physical security.
Yes. Skynova Cloud provides full support for containers, with orchestrated Kubernetes clusters, load monitoring, and automatic scaling of pods and nodes.
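As an illustration, a cluster delivered this way can be inspected with the standard Kubernetes Python client, assuming the cluster’s kubeconfig has been downloaded locally; nothing here depends on Skynova-specific APIs.

```python
# Sketch: list nodes and pod autoscaling targets in a managed Kubernetes cluster.
# Assumes the cluster's kubeconfig is available locally and the `kubernetes` package is installed.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config by default

core = client.CoreV1Api()
for node in core.list_node().items:
    print("Node:", node.metadata.name, node.status.node_info.kubelet_version)

autoscaling = client.AutoscalingV1Api()
for hpa in autoscaling.list_horizontal_pod_autoscaler_for_all_namespaces().items:
    print("HPA:", hpa.metadata.namespace, hpa.metadata.name)
```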
Yes. The hybrid model allows you to connect your on‑premises infrastructure to Skynova Cloud, unifying management and leveraging the best of both worlds.
Yes. Partners can operate Skynova Cloud with a customized panel, manage their own clients, and offer services under their own brand.
• Skybox (file storage).
• SkyOffice (online collaboration).
• Talk (corporate communication).
• Corporate backup with Veeam.
• DNS, email, hosting, and antispam services.
Didn't find your question?
Our team of experts is ready to clarify all your questions about Skynova’s Private Cloud Computing.