Monitoring - Our Engineering team has implemented a fix to resolve the issue affecting DNS resolution of the .co top-level domain (TLD).
Users relying on DigitalOcean DNS resolvers in their resources should no longer experience DNS resolution issues.
Our Engineers are currently monitoring the situation. We will post an update as soon as the issue is fully resolved.
Apr 17, 2026 - 22:53 UTC
Identified - Our Engineering team is aware of a widespread, external issue affecting the .co top-level domain (TLD). While this incident originates outside of DigitalOcean's infrastructure, you may experience errors when querying a .co domain, regardless of the DNS resolver being used.
Our Engineers are actively deploying temporary backend mitigations to help minimize the impact on our customers. We will continue to monitor the situation closely and post updates as more information becomes available.
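One way to confirm whether the mitigation is working from your environment is to check that a .co name resolves through your system resolver. A minimal diagnostic sketch (the .co hostname below is a placeholder, not a real affected domain):

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the system resolver returns at least one address."""
    try:
        return len(socket.getaddrinfo(hostname, None)) > 0
    except socket.gaierror:
        return False

# Compare a placeholder .co name against a control name on another TLD;
# if only the .co lookup fails, the TLD-level issue is the likely cause.
for name in ("example.co", "example.com"):
    print(name, "resolves" if resolves(name) else "FAILS to resolve")
```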
Apr 17, 2026 - 21:22 UTC
Scheduled -
Phase 1 maintenance is complete. Phase 2 is scheduled to begin on Monday, 20 April 2026 at 09:00 UTC.
Apr 16, 2026 - 15:05 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 16, 2026 - 09:10 UTC
Scheduled -
Hello,
During the maintenance windows listed below, our Engineering team will be performing maintenance on core control plane infrastructure across all regions. Please note that the existing infrastructure will continue running without issue.
This maintenance will be carried out in four phases as outlined below:
16 April 2026 (Thursday), 09:00–15:00 UTC
20 April 2026 (Monday), 09:00–15:00 UTC
21 April 2026 (Tuesday), 09:00–22:00 UTC
22 April 2026 (Wednesday), 09:00–22:00 UTC
Expected Impact:
We do not anticipate any impact; however, there is a small possibility that Control Panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance window. All running workloads are expected to continue operating normally without interruption.
Our team will be actively monitoring the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page.
Resolved -
From 17:22 to 17:46 UTC, our Engineering team observed an issue impacting Spaces availability in the NYC3 region. During this time, customers may have encountered 500 errors and degraded performance while accessing Spaces buckets. The issue has now been fully resolved. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
Apr 17, 18:27 UTC
Resolved -
From 23:20 UTC to 02:00 UTC, users may have experienced elevated error rates due to service instability, which resulted in intermittent HTTP 500 errors and terminated connections.
Our Engineering team has confirmed full resolution of the issue, and all systems are now operating normally.
If you continue to experience any issues, please open a ticket with our support team. We apologize for any inconvenience caused.
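Intermittent HTTP 500 errors and dropped connections during incidents like this are usually transient and safe to absorb with client-side retries. A minimal sketch of jittered exponential backoff (the request callable and status codes are illustrative placeholders, not a DigitalOcean client):

```python
import random
import time

def retry(fn, attempts=4, base_delay=0.5, retryable=(500, 502, 503)):
    """Call fn() until it returns a non-retryable status or attempts
    are exhausted, sleeping with full-jitter exponential backoff."""
    for attempt in range(attempts):
        status, body = fn()
        if status not in retryable:
            return status, body
        if attempt < attempts - 1:
            # Sleep a random amount in [0, base_delay * 2**attempt]
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
    return status, body
```

Full jitter spreads retries out so that many clients recovering at once do not hammer a service that is already unstable.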
Apr 15, 02:56 UTC
Monitoring -
Our Engineering team has implemented a fix for the issue causing elevated error rates due to service instability. We are currently monitoring the situation to ensure stability and confirm that error rates, including HTTP 500 responses, have returned to normal levels.
We will provide a further update once we confirm the issue is fully resolved.
Apr 15, 02:24 UTC
Investigating -
Our Engineering team is investigating an issue causing elevated error rates due to service instability, which is terminating open connections and producing some HTTP 500 errors. Some requests may fail while we work to resolve it.
We apologize for the inconvenience and will share an update once we have more information.
Apr 15, 01:07 UTC
Resolved -
From 16:07 to 16:50 UTC, our Engineering team observed an issue with App Platform Deployments in all regions. During this time, deployments of both new and existing apps may have been affected. Our team fully resolved the issue as of 16:50 UTC. All new and existing App deployments should now be functioning as expected. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
Apr 14, 17:57 UTC
Resolved -
Our Engineering team has resolved the issue with resize operations for Managed Databases, which should now be operating normally. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience.
Apr 14, 15:19 UTC
Monitoring -
Our Engineering team has taken action to mitigate the issue with resize operations for Managed Databases and implemented a fix. We are monitoring the situation and will post an update as soon as we confirm that the issue is fully resolved.
Apr 14, 14:30 UTC
Investigating -
Our Engineering team is investigating an issue impacting resize operations for Managed Databases. During this time, users may experience errors when attempting to resize a Managed Database via the Cloud Control Panel or API in all regions. We apologize for the inconvenience and will share an update once we have more information.
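For reference, a Managed Database resize is issued as a POST to the `/v2/databases/{id}/resize` endpoint of the DigitalOcean API. The sketch below only builds the request without sending it; the database ID, size slug, node count, and token are placeholders:

```python
import json
import urllib.request

API = "https://api.digitalocean.com/v2/databases"

def build_resize_request(database_id, size_slug, num_nodes, token):
    """Build (but do not send) a resize request for a Managed Database."""
    payload = json.dumps({"size": size_slug, "num_nodes": num_nodes}).encode()
    return urllib.request.Request(
        f"{API}/{database_id}/resize",
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# Placeholder values; send with urllib.request.urlopen(req) when ready.
req = build_resize_request("db-uuid-placeholder", "db-s-2vcpu-4gb", 2, "DO_TOKEN")
```

Retrying a failed resize from the CLI or a script is safe once the incident is resolved, since the operation is idempotent with respect to the target size.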
Apr 14, 12:53 UTC
Resolved -
Our Engineering team has confirmed full resolution of the issue with creating Droplets in all regions. Users should be able to create Droplets without issue.
We apologize for the inconvenience. If you continue to face any issues, please open a support ticket from within your account.
Apr 10, 21:06 UTC
Monitoring -
Droplet Availability in All Regions
Our Engineering team has identified an issue with Droplet creation in all regions. A root cause has been found, a fix has been put in place, and we are currently monitoring the situation to ensure full resolution. Users should be able to create new Droplets at this time.
We will continue to monitor and we will post an update as soon as it is fully resolved. We apologize for the inconvenience.
Apr 10, 20:32 UTC
Completed -
The scheduled maintenance has been completed.
Apr 9, 22:51 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 9, 18:00 UTC
Update -
We will be undergoing scheduled maintenance during this time.
Apr 7, 18:23 UTC
Scheduled -
Start: 2026-04-09 18:00 UTC End: 2026-04-10 24:00 UTC
During the above window, our Engineering team will perform maintenance on core MongoDB services in the BLR1, NYC3, SFO2, SGP1, SYD1 & TOR1 regions to enhance security and improve auditing and compliance. Please note that existing databases and workloads will continue to function normally and will not be impacted.
Expected Impact:
We do not anticipate any service disruptions during this window. Your existing databases and workloads will continue to run normally without interruption.
In the event that an unexpected issue occurs, administrative actions, such as creating, deleting, or scaling Managed MongoDB databases in the BLR1, NYC3, SFO2, SGP1, SYD1 & TOR1 regions, may experience delays.
If an unexpected issue arises, we will work to keep any impact to a minimum and may revert the changes if required.
Completed -
The scheduled maintenance has been completed.
Apr 7, 22:25 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 7, 18:00 UTC
Scheduled -
Start: 2026-04-07 18:00 UTC End: 2026-04-08 00:00 UTC
During the above window, our Engineering team will perform maintenance on core MongoDB services in the AMS3, ATL1, LON1, NYC1, NYC2 & SFO3 regions to enhance security and improve auditing and compliance. Please note that existing databases and workloads will continue to function normally and will not be impacted.
Expected Impact:
We do not anticipate any service disruptions during this window. Your existing databases and workloads will continue to run normally without interruption.
In the event that an unexpected issue occurs, administrative actions, such as creating, deleting, or scaling Managed MongoDB databases in the AMS3, ATL1, LON1, NYC1, NYC2 & SFO3 regions, may experience delays.
If an unexpected issue arises, we will work to keep any impact to a minimum and may revert the changes if required.
Resolved -
Our Engineering team has resolved the control plane disruption that occurred from 17:06 to 17:18 UTC. During this time, users may have experienced intermittent issues with managing their resources through the Cloud Control Panel or DigitalOcean API. The root cause of the disruption was identified and addressed, and all services are now operating normally.
If you continue to experience any problems, please open a ticket with our Support team. We apologize for any inconvenience this may have caused.
Apr 7, 17:49 UTC
Resolved -
Service has been fully restored, and the model is now operating normally. We have implemented improvements to enhance stability and reduce the likelihood of similar issues in the future.
Apr 7, 15:50 UTC
Identified -
We are currently investigating reports of elevated latency affecting requests to this model when using Serverless Inference and Agents.
Earlier observations indicated increased error rates for the open-source Qwen 3 32B model. The Ray dashboard also showed multiple workers in a pending state, suggesting capacity constraints.
Our analysis determined that the model was experiencing higher-than-expected request volume without sufficient resources to scale accordingly. To address this, the node pool size has been increased to improve available capacity. However, there are still insufficient nodes to fully support the desired number of model replicas.
Following the node pool expansion, a new pod-related error has been identified. Our Engineering team is actively working to resolve this issue and restore full service performance.
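The capacity constraint described above comes down to simple arithmetic: each model replica reserves a fixed slice of a node, and replicas cannot span nodes. A sketch with entirely hypothetical numbers:

```python
import math

def nodes_needed(replicas, gpus_per_replica, gpus_per_node):
    """Minimum node count to schedule all replicas, assuming each
    replica must fit wholly on one node."""
    replicas_per_node = gpus_per_node // gpus_per_replica
    if replicas_per_node == 0:
        raise ValueError("a single replica does not fit on one node")
    return math.ceil(replicas / replicas_per_node)

# e.g. 6 desired replicas at 4 GPUs each on 8-GPU nodes needs 3 nodes;
# with only 2 nodes available, workers for the extra replicas stay pending.
```

This is why workers sat in a pending state on the Ray dashboard: the scheduler had more replica demand than whole-replica slots across the pool.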
Apr 7, 12:55 UTC
Investigating -
Serverless inference for alibaba-qwen3-32b (Qwen 3 32B) in tor1 is experiencing high error rates starting at 10:46 UTC.
Apr 7, 12:49 UTC