V2EX  ›  宽带症候群

Is there a problem with the global network tonight?

gesse · 2021-12-22 23:43:51 +08:00 · 3902 views
This topic was created 1067 days ago; the information in it may be outdated.

A lot of websites feel slow to access, imgur is having problems too, and just now some v2exers reported that Apple's notification data is syncing slowly, among other things.

11 replies · 2021-12-23 20:02:36 +08:00
#1 · est · 2021-12-22 23:50:32 +08:00 · ❤️ 4
Round it off and AWS ≈ the global network.
#2 · NXzCH8fP20468ML5 · 2021-12-22 23:50:45 +08:00 · via Android
AWS blew up again.
#3 · LAMBO · 2021-12-22 23:58:19 +08:00 · via iPhone
Just Dance's worldwide online play also just ran into problems.
#4 · EIlenZe · 2021-12-23 00:32:06 +08:00 · via iPhone
#5 · prondtoo · 2021-12-23 00:55:01 +08:00

Event: EC2 operational issue
Start time: December 22, 2021, 8:35:51 PM UTC+8
Status: Open
End time: -
Region / Availability Zone: us-east-1
Category: Issue
Account specific: No
Affected resources: -
Description: API Error Rates

    [04:35 AM PST] We are investigating increased EC2 launch failures and networking connectivity issues for some instances in a single Availability Zone (USE1-AZ4) in the US-EAST-1 Region. Other Availability Zones within the US-EAST-1 Region are not affected by this issue.

    [05:01 AM PST] We can confirm a loss of power within a single data center within a single Availability Zone (USE1-AZ4) in the US-EAST-1 Region. This is affecting availability and connectivity to EC2 instances that are part of the affected data center within the affected Availability Zone. We are also experiencing elevated RunInstance API error rates for launches within the affected Availability Zone. Connectivity and power to other data centers within the affected Availability Zone, or other Availability Zones within the US-EAST-1 Region are not affected by this issue, but we would recommend failing away from the affected Availability Zone (USE1-AZ4) if you are able to do so. We continue to work to address the issue and restore power within the affected data center.

    [05:18 AM PST] We continue to make progress in restoring power to the affected data center within the affected Availability Zone (USE1-AZ4) in the US-EAST-1 Region. We have now restored power to the majority of instances and networking devices within the affected data center and are starting to see some early signs of recovery. Customers experiencing connectivity or instance availability issues within the affected Availability Zone, should start to see some recovery as power is restored to the affected data center. RunInstances API error rates are returning to normal levels and we are working to recover affected EC2 instances and EBS volumes. While we would expect continued improvement over the coming hour, we would still recommend failing away from the Availability Zone if you are able to do so to mitigate this issue.

    [05:39 AM PST] We have now restored power to all instances and network devices within the affected data center and are seeing recovery for the majority of EC2 instances and EBS volumes within the affected Availability Zone. Network connectivity within the affected Availability Zone has also returned to normal levels. While all services are starting to see meaningful recovery, services which were hosting endpoints within the affected data center - such as single-AZ RDS databases, ElastiCache, etc. - would have seen impact during the event, but are starting to see recovery now. Given the level of recovery, if you have not yet failed away from the affected Availability Zone, you should be starting to see recovery at this stage.

    [06:13 AM PST] We have now restored power to all instances and network devices within the affected data center and are seeing recovery for the majority of EC2 instances and EBS volumes within the affected Availability Zone. We continue to make progress in recovering the remaining EC2 instances and EBS volumes within the affected Availability Zone. If you are able to relaunch affected EC2 instances within the affected Availability Zone, that may help to speed up recovery. We have a small number of affected EBS volumes that are still experiencing degraded IO performance that we are working to recover. The majority of AWS services have also recovered, but services which host endpoints within the customers' VPCs - such as single-AZ RDS databases, ElastiCache, Redshift, etc. - continue to see some impact as we work towards full recovery.

    [06:51 AM PST] We have now restored power to all instances and network devices within the affected data center and are seeing recovery for the majority of EC2 instances and EBS volumes within the affected Availability Zone. For the remaining EC2 instances, we are experiencing some network connectivity issues, which is slowing down full recovery. We believe we understand why this is the case and are working on a resolution. Once resolved, we expect to see faster recovery for the remaining EC2 instances and EBS volumes. If you are able to relaunch affected EC2 instances within the affected Availability Zone, that may help to speed up recovery. Note that restarting an instance at this stage will not help as a restart does not change the underlying hardware. We have a small number of affected EBS volumes that are still experiencing degraded IO performance that we are working to recover. The majority of AWS services have also recovered, but services which host endpoints within the customers' VPCs - such as single-AZ RDS databases, ElastiCache, Redshift, etc. - continue to see some impact as we work towards full recovery.

    [08:02 AM PST] Power continues to be stable within the affected data center within the affected Availability Zone (USE1-AZ4) in the US-EAST-1 Region. We have been working to resolve the connectivity issues that the remaining EC2 instances and EBS volumes are experiencing in the affected data center, which is part of a single Availability Zone (USE1-AZ4) in the US-EAST-1 Region. We have addressed the connectivity issue for the affected EBS volumes, which are now starting to see further recovery. We continue to work on mitigating the networking impact for EC2 instances within the affected data center, and expect to see further recovery there starting in the next 30 minutes. Since the EC2 APIs have been healthy for some time within the affected Availability Zone, the fastest path to recovery now would be to relaunch affected EC2 instances within the affected Availability Zone or other Availability Zones within the region.
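The updates above repeatedly recommend "failing away" from the affected Availability Zone and relaunching instances elsewhere. As a minimal, offline sketch of the first step, the hypothetical helper below (not from the thread) lists which instances sit in a given AZ, assuming the dict shape returned by EC2's DescribeInstances API. Note that USE1-AZ4 is an AZ *ID*; the AZ *name* it maps to (e.g. us-east-1d) differs per AWS account.

```python
# Hypothetical helper: given a response in the shape returned by the
# EC2 DescribeInstances API, list the IDs of instances placed in a
# particular Availability Zone, so they can be relaunched elsewhere
# as the AWS status updates recommend.

def instances_in_az(response: dict, az: str) -> list[str]:
    """Return IDs of instances whose Placement.AvailabilityZone equals `az`."""
    ids = []
    for reservation in response.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            if inst.get("Placement", {}).get("AvailabilityZone") == az:
                ids.append(inst["InstanceId"])
    return ids

# Demo with a hand-written sample response (no AWS credentials needed):
sample = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-0aaa", "Placement": {"AvailabilityZone": "us-east-1d"}},
            {"InstanceId": "i-0bbb", "Placement": {"AvailabilityZone": "us-east-1a"}},
        ]}
    ]
}
print(instances_in_az(sample, "us-east-1d"))  # → ['i-0aaa']
```

With live credentials, the same helper could be fed the output of boto3's `ec2.describe_instances()`; `describe_availability_zones()` reports each zone's `ZoneId`, which lets you resolve USE1-AZ4 to the zone name your account sees.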
#6 · Mirage09 · 2021-12-23 02:15:37 +08:00 · via iPhone
Feeling for the on-call folks involved. Who knows which COE this makes by now.
#7 · 0gys · 2021-12-23 10:07:44 +08:00 · via iPhone
Looks like being too centralized isn't such a good thing either.
#8 · littlewing · 2021-12-23 12:59:16 +08:00
Got it, your place == the whole world.
#9 · gesse (OP) · 2021-12-23 15:40:52 +08:00
@littlewing Blocked.
#10 · 1sm23 · 2021-12-23 18:48:16 +08:00
No wonder I couldn't open git-scm.com for ages yesterday!!
#11 · littlewing · 2021-12-23 20:02:36 +08:00
@gesse Thanks.