Sunday, August 19, 2012
Bunker Partial Power Failure
19/8/12 20:20 - We're currently experiencing a partial power failure on the mains at the Bunker datacentre. It appears we have lost one of the phases. Unfortunately, under these circumstances the generators did not kick in, so a manual override has been applied. Equipment has been running on UPS, but some units have exhausted their batteries - as soon as these UPSs have enough charge they will start passing current again and the equipment will power up.
20:40 - SP Confirm single phase failure.
20/8/12 08:30 - All work complete and Bunker verified as back on mains power.
Wednesday, July 25, 2012
DSL Network Issues 25/7/2012
15:22
Our wholesale DSL interconnects have just gone down - we're investigating the issue now.
15:26
A router in the wholesale network experienced an unexpected restart - connections are coming back online now. If your DSL router does not reconnect, try a 1-minute power-cycle; if that does not work, try a full 15-minute power-cycle. Any issues after that, please give us a call.
Friday, July 06, 2012
Issues with C2 DSL Network
July 6th 2012 16:56
From approx 15:55 we have lost connectivity to our equipment in Telecity HEX6&7. Remote hands cannot identify any issues, so we are sending an engineer to site with spare equipment to investigate and replace as required. We estimate being onsite by 9pm, weather permitting; an update will follow shortly after that. This is affecting DSL customers and fixed line services provided through that datacentre. Services provided out of our other datacentres are not affected, and our own facility at Hack Green Bunker is also unaffected.
Regards C2 Technical
10:00pm Engineer onsite.
10:54pm Equipment replaced and configuration loaded - DSL connections and fixed line services restored. DSL connections may require a reboot in order to reconnect.
Apologies for the inconvenience.
Regards C2 Technical
Thursday, April 05, 2012
Emergency Maintenance Notification
We have received notification from our primary transit provider that on 12th April 2012, between 1am and 4am GMT, there is a possibility of an outage of up to 20 minutes.
Regards
Atlas Technical
Tuesday, August 09, 2011
Future works on Manchester ring 19/08/2011
We've received notification from one of our network providers that as part of expanding their network, a number of fibre routes between TeleData and IFL2 require diverting.
This work will take place during the period 19/08/2011 23:00 to 20/08/2011 11:00 (BST).
This will result in a loss of protection on the Manchester ring until the work is complete.
Regards
C2
Wednesday, July 20, 2011
Issues on Manchester Ring
The causes of the issues on the Manchester ring last night are still not clear. We know part of the ring went down; what we cannot do, however, is replicate the issue - each time we manually shut down part of the ring, traffic simply flows in the opposite direction. We are aware of issues with our network provider, who have been working on the TCW to IFL2 links, and we will be chasing them today for an update on whether they were working on anything at the time. As we cannot replicate the issue it would appear that some outside influence may be affecting the network; nevertheless, we will be carrying out some emergency investigative work onsite today. From 1pm onwards you may see some network blips, which we will try to keep to a minimum.
Apologies for the short notice; however, we need to find the cause if at all possible.
Once we have completed our tests we will post a further update.
---
Following on from the tests, a switch at TCW, which is part of a pair, has been identified as the most probable cause of the issues and instabilities. As the issue is somewhat intermittent I'm reluctant to pull the switch while it's under load from directly connected clients - the quietest time across all the ports is around 7am, so we will look to swap the unit out then on Thursday 21st. Directly connected clients and those using services connected to this switch will drop for a few minutes while the switch is replaced.
---
The work is now complete - the suspect switch will be taken back for further tests in the lab; further updates will be posted here.
Tuesday, December 21, 2010
DSL issues in London
Some C2 customers in the London area may currently be experiencing DSL issues due to an incident at the West End BT exchange. BT engineers are trying to restore services after a flood prompted a fire in the exchange. BT expect to have services restored shortly.
Thursday, December 02, 2010
Scheduled maintenance 6th December 10:00 and 16:00
Hi,
We have been advised by our network provider that, as part of their capacity planning, they have to move the current line between IFL and Telecity Williams to another line. Though the window for the required work is between 10:00 and 16:00, they expect the line to be down for only one hour. The remaining two legs of the ring will remain untouched, so customers should not notice any disruption to service, but the Manchester ring will obviously be at greater risk of interruption while testing takes place.
We have insisted that this work be completed while we are in contact with their engineers, so that if we notice anything unexpected on the network, we can get any work reversed immediately.
Kind Regards
Stuart McKindley
Monday, November 22, 2010
Removal of secondary mail server mail-relay20.c2internet.net
Due to hardware failure the server mail-relay20.c2internet.net is being removed from service.
This server's only role was to operate as a secondary/backup MX for customers requesting this functionality, where those customers operated their own primary mail servers.
While this type of setup was once the norm, as the war against spam continues these backup servers have been targeted as easy routes in. This gives rise to a few problems:
It's not uncommon for these backup servers to be whitelisted/trusted by the primary server, totally defeating any anti-spam techniques the primary is using. The backup server will accept all mail for the domains for which it is told to be the secondary; if, when that email is forwarded on, the primary server rejects a mailbox as unknown, the backup server will want to send a non-delivery report. If the originating email was from a forged address these NDRs clog the system further, which just puts extra load on the server for no good reason. Worst case, the NDRs are sent to a valid email address that had nothing to do with the original email, at which point the server is generating backscatter, which is every bit as bad as spam.
If the primary mail server fails, most sending servers will quite happily queue email, notify the sender of any delays and retry delivery a few minutes after the server comes back up.
With all this in mind we will shortly be removing all entries from DNS for mail-relay20.c2internet.net. The unusual thing here is that customers who have been using the service may well see a drop in incoming spam compared with what they have been used to.
This does not affect customers that have their own secondary mail servers.
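For customers who want to double-check their own domain once the records are withdrawn, listing the domain's MX entries will confirm that mail-relay20.c2internet.net no longer appears. A minimal sketch in Python follows; it assumes the third-party dnspython package and uses example.com as a placeholder domain, so treat it as an illustration rather than part of our tooling.

# Minimal sketch: list a domain's MX records, lowest preference (most preferred) first.
# Requires the third-party "dnspython" package (pip install dnspython).
# "example.com" is a placeholder - substitute the domain you want to check.
import dns.resolver

def list_mx(domain):
    answers = dns.resolver.resolve(domain, "MX")
    for rdata in sorted(answers, key=lambda r: r.preference):
        print(f"{rdata.preference:>3}  {rdata.exchange}")

if __name__ == "__main__":
    list_mx("example.com")

If the old secondary still shows up for a while after the change, it is most likely stale data cached by your resolver and will age out with the record's TTL.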
DSL Connections via BT
-- 15:15
Fault is now cleared - we will continue to monitor.
Apologies for any inconvenience.
-- 14:26
We're seeing a number of lines coming back up, though as yet have had no notification of this fault being cleared. We will continue to monitor.
-- 13:49
There is an issue affecting a number of tail circuits that are provided over the BT wholesale network. This is affecting a number of ISPs and is not related to anything within our network or anything under our direct control. The issue is being investigated and more information will be posted as soon as is available.
Wednesday, November 10, 2010
Service Outage report for 10th November 2010
17:32-
We've just had confirmation that the outage was caused by two separate faults in two separate geographic locations: one fault was on the provider's Leeds to Sheffield connection, the other on their Warrington to Birmingham connection.
13:42-
At 10:50am this morning we lost both our west-bound and east-bound connections from Manchester to London, which had the effect of partitioning our core network into two. This partitioning would have caused routing issues and, due to the location of the name and RADIUS servers within the network, name lookups and xDSL authentication would also have failed.
Our transit feed out of Manchester was also experiencing problems; as that issue cleared at the same time our connections came back up, it was no doubt down to the same root cause.
With the issue affecting multiple providers it was clear the problem was not within any equipment under our direct control, nor the outcome of any of our actions within the network.
Our main telephone system is also based out of Manchester; when the server went offline it failed over onto the backup analogue PSTN system, though the number of incoming calls obviously proved a challenge.
We're currently in discussion with our network provider for the Manchester to London connections, as these routes should be separate and diverse. Initially they went via separate providers; however, due to consolidation within the market, one provider has ended up owning both networks. If it transpires that our provider has, without our knowledge or authorisation, joined these pathways then of course action will be taken.
At 12:40pm both connections came back up; with the exception of transit out of Manchester, connections and traffic flows returned to normal once the network had re-converged. Approximately ten minutes later, transit via our transit provider also re-established.
Our apologies for this outage and the inconvenience.
Monday, July 26, 2010
Upstream transit provider
-- 9:00AM
We're currently experiencing packet loss on one of our upstream transit providers. The connection needs to remain active for a short while to allow us to run diagnostics before passing the call to our provider.
-- 9:25am
Sessions to this transit provider have now been shut down and traffic is flowing via alternative pathways. However, another upstream provider is also now showing packet loss, so this provider has been disabled as well.
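For reference, diagnostics of this kind typically come down to measuring sustained packet loss towards the affected next hop. A minimal sketch in Python is below; it assumes a Unix-style ping that accepts -c for the packet count and uses the documentation address 192.0.2.1 as a placeholder target, so it illustrates the idea rather than our exact tooling.

# Minimal sketch: measure packet loss to a target using the system ping.
# Assumes a Unix-like ping supporting -c <count>; the target below is a placeholder.
import re
import subprocess

def packet_loss(target, count=20):
    out = subprocess.run(
        ["ping", "-c", str(count), target],
        capture_output=True, text=True,
    ).stdout
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    return float(match.group(1)) if match else 100.0

if __name__ == "__main__":
    print(f"loss towards 192.0.2.1: {packet_loss('192.0.2.1')}%")

Running this repeatedly over a few minutes gives a rough loss figure that can be quoted to the provider when a fault is raised.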
Thursday, June 03, 2010
General Service Issues
4:00AM - Customers may be experiencing some general connectivity issues ranging from some Internet hosts being unavailable through to DSL lines not reconnecting after a reboot.
There appears to be a service outage in London which is affecting some of our suppliers, this outage is affecting multiple providers.
If you are experiencing any issues please do not reboot your router at this stage.
More information will be posted as it becomes available
5:40AM - Most of the affected circuits came back up approx 15 mins ago; work is continuing to restore the remainder as soon as possible.
6:50AM - The remaining circuits have now re-established - all connectivity has returned to normal.
Friday, March 26, 2010
Browsing / Access delays
Customers are experiencing delays in reaching some parts of the Internet.
---
11:04am The sites giving problems are those we would normally reach through Telehouse North in London; we have therefore closed down transit and peering at THN, which is forcing traffic to take alternative routes through different transit partners. This has improved things for some sites, though some others remain problematic. The issues causing these delays are located outside of our network and thus outside of our control. We are waiting on updates as to when these problems will be corrected, and we will then re-enable peering and transit at THN. Apologies to those customers affected by this issue.
Monday, March 15, 2010
C2 21CN ADSL
5pm - We're experiencing some issues with our 21CN connections failing to reconnect after a reboot. BT are investigating; if your connection is 21CN please do not do a manual reboot until the problem is resolved. Thanks.
8:37pm - Problems appear to be resolved and most of the disconnected sessions have come back online. The incident should be considered closed; we will, however, continue to monitor for a while.
Friday, January 22, 2010
C2 DSL Network
We're experiencing some issues on the DSL network and engineers are investigating. The problem is affecting multiple interconnects at multiple locations but does not appear to be core-network related.
Monday, November 30, 2009
Network upgrades
During the course of this week we are increasing our switchport capacity at IFL2 and TCW. We are also taking this opportunity to relocate some equipment to increase our internal redundancy and resilience.
While the installations and relocations take place there will be points in time where some circuits and systems will be deemed at risk, however traffic at these points will be manually set to traverse through alternate pathways and routers.
----
Dec 3rd update.
IFL2 Complete.
----
Dec 8th update.
Due to delays in getting some fibre links provisioned at TCW this site will be delayed, new circuits should be in by Dec 18th.
Monday, October 19, 2009
Core Network
We are currently experiencing problems across multiple links of the core network; our providers have been notified and we are waiting for an update.
Update 10:09; we've been told that the problems are down to a major incident in London which is affecting multiple parties, unfortunately it is of a scale which covers both our London based datacentres.
Update 10:18; THN now appears to be stable, but all our connections to RBHX are now down (rather than flapping).
Update 10:47; we've now seen RBHX come back online, though no direct confirmation yet from our provider.
Update 13:45; we've just seen a blip on all our connections at RBHX, approx 1 minute.
For the near future services should still be considered at risk.
Monday, September 28, 2009
Manchester Transit HSRP Problems
One of a pair of Cisco routers serving some transit customers is experiencing issues; it is currently not passing packets. This should have caused an automatic failover to its pair; however, for reasons currently unknown, it did not fail over. The standby priority for all affected customers has been increased on the operational router, so traffic is once again flowing.
During normal operation both routers carry traffic, with some customers having a higher priority on router 1 and others a higher priority on router 2. A number of scenarios are tested to ensure failover does occur, so this failure is a bit unusual.
An engineer is currently en-route to verify the status of the router experiencing problems.
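For background on why adjusting standby priorities restores traffic: in HSRP the router with the highest priority becomes active for a group (ties are broken by the highest interface IP address), so raising the working router's priority above the faulty one lets it take over, assuming preemption is enabled. The Python sketch below models only that election logic; the router names, priorities and addresses are made up for illustration and this is not vendor configuration.

# Minimal sketch of HSRP active-router election: highest priority wins,
# with ties broken by the highest interface IP address. Names, priorities
# and addresses are illustrative only; assumes preemption is in effect.
from dataclasses import dataclass
from ipaddress import IPv4Address

@dataclass
class HsrpRouter:
    name: str
    priority: int          # 0-255, default 100
    ip: IPv4Address
    healthy: bool = True   # False once a router is withdrawn from the election

def elect_active(routers):
    candidates = [r for r in routers if r.healthy]
    return max(candidates, key=lambda r: (r.priority, r.ip))

pair = [
    HsrpRouter("router1", priority=110, ip=IPv4Address("192.0.2.1")),
    HsrpRouter("router2", priority=100, ip=IPv4Address("192.0.2.2")),
]
print(elect_active(pair).name)   # router1 carries this group normally
pair[1].priority = 120           # raise the standby priority on the working router
print(elect_active(pair).name)   # router2 now wins the election

Note this models only the election itself; HSRP detects a failed peer through missed hello packets, so a router that keeps sending hellos while no longer forwarding traffic will not necessarily trigger a failover on its own - one possible explanation for the behaviour above, though the actual cause is still being investigated.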
Unexpected Reboot of rtr1.thn
At 17:03 rtr1.thn unexpectedly rebooted, customers served via this router would have noticed approx 7 minutes of downtime while the router reloaded.
The router appears to be stable after the reboot though reasons for why it may have rebooted are still being investigated. We will continue to monitor the router closely for the next few hours.