EMC Centera Server Release Notes Rev.a40
Server
Version 4.0.1
Release Notes
P/N 085090653
REV A40
Product description
These release notes support EMC® CentraStar® version 4.0.1 and
supplement the EMC Centera® documentation. Read the entire
document: it describes potential problems and irregularities in the
software and contains late changes to the product documentation.
These release notes include fixed and known issues reported as of the
publication date of this document. For the most current listing of
fixed and known problems, view them online in Issue Tracker. To
open Issue Tracker, follow these steps:
1. Go to the EMC Powerlink® website at:
http://Powerlink.EMC.com.
2. From the menu bar, select Support > Interoperability and
Product Lifecycle Information > E-Lab Issue Tracker
Information > E-Lab Issue Tracker.
3. To familiarize yourself with how to use Issue Tracker, click E-Lab
Issue Tracker Help in the upper-right corner of the window.
Centera Viewer
The following changes have been made to Centera Viewer:
◆ Capability to restart the CentraStar software on a node or to
restart a node has been added to Commands > Nodelist > Restart.
Note that this capability must only be used as part of a procedure
to check or move a modem connection to ensure remote
connectivity and that this procedure must be performed under
the direction of EMC support. Refer to Primus use cases
emc171365, emc171366, and emc171367 for more detailed
instructions.
◆ Capability to identify a node has been added to Commands >
Nodelist. Note that this capability should only be used as part of a
procedure to check or move a modem connection to ensure
remote connectivity. Refer to Primus use case emc171365 for more
detailed instructions.
◆ Commands > Nodelist shows hardware model of node.
Health Report
The following changes were made to the Health Report:
◆ The following fields have been added to the Garbage Collection
section of the HTML report:
• overall progress
• phase progress
• sweep frag count
• sweep space reclaimed
• sweep frag count completed runs
• sweep space reclaimed completed runs
Fixed problems
The following are fixed issues for this release:
Self Healing
Host OS Any OS
Symptom Clusters with more than 100 nodes may experience occasional node restarts when multiple
regenerations are running at the same time.
Fix Summary
Host OS Any OS
Problem Heavy system load may postpone tasks like self healing and Garbage Collection
Symptom Clusters under high system load running CentraStar 4.0 with a high object count may not be able
to clean up critical system resources in a timely manner. As a result, EDM may start repairing
disks too often, which may unnecessarily postpone other tasks such as self healing and Garbage
Collection.
Fix Summary
Configuration
Host OS Any OS
Problem Adding external role(s) to nodes that are connected to a fiber optic network might fail
Symptom It is not possible to add the access, replication, or management role to nodes that are connected
to a fiber optic network if these nodes did not have the access role before they were upgraded to
CentraStar 4.0. It only works if DHCP is enabled.
Fix Summary
Centera SDK
Host OS Any OS
Problem SDK Exists calls might fail for C-Clips referencing blobs stored in CPM with single instancing
Symptom The SDK calls FPClip_Exists and FPTag_BlobExists might fail on a cluster running CentraStar
4.0 for C-Clips referencing blobs that are not yet single instanced and were written to the cluster
in CPM before upgrading to CentraStar 4.0.
Fix Summary
Host OS Any OS
Symptom The SDK fails to read C-Clips which have a copy on a failed disk and are under EBR, returning
error code -10005 (FP_SERVER_ERR).
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0,
4.0.0p1, 4.0.0p2
Host OS Any OS
Problem During upgrade from CentraStar version 3.0.* to 4.0 the application may receive read errors
Symptom During an upgrade from CentraStar version 3.0.* to 4.0, the application may receive read errors
from the cluster. Retrying the read operation will normally succeed.
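Since retrying normally succeeds, the client-side handling is simply to retry failed reads. A minimal sketch of such a retry wrapper (illustrative only — `read_fn` stands in for the application's actual Centera read call, and the error type and defaults are assumptions, not part of the SDK):

```python
import time

def read_with_retry(read_fn, clip_id, retries=3, sleep_s=1.0):
    """Retry a read that may fail transiently, for example while a
    node with the access role is being upgraded."""
    last_err = None
    for attempt in range(retries + 1):
        try:
            return read_fn(clip_id)
        except IOError as err:  # stand-in for an SDK read error
            last_err = err
            if attempt < retries:
                time.sleep(sleep_s)
    raise last_err
```

In practice the wrapped call would be the application's SDK read; here any callable that raises on transient failure works.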
Upgrades
Host OS Any OS
Problem Gen4LP nodes with missing roles after the upgrade to CentraStar 4.0
Symptom Gen4LP nodes with the access role which have been added to the cluster before upgrading to
CentraStar 4.0 may not have the newly assigned management and replication roles after the
upgrade.
Fix Summary
Replication
Host OS Any OS
Problem Replicating over a slow network link might cause node reboots on source and target cluster
Symptom Replicating over a network link with insufficient throughput, causing significant transfer delays,
might cause nodes on both the source and target cluster to reboot.
Fix Summary
Monitoring
Host OS Any OS
Problem Multiple health reports may be received if management roles are removed
Symptom If the last or second-to-last management role is removed from a cluster and ConnectHome
is enabled, multiple nodes may start sending health reports.
Fix Summary
Host OS Any OS
Symptom SNMP only alerts hardware failures and does not send out other alerts.
Server
Host OS Any OS
Problem Garbage Collection overall percentage complete shows less than 100%
Symptom A node that is added after Garbage Collection is started will not be part of the current Garbage
Collection run. It should report 100% complete for that added node but instead reports 0%
complete. This causes the overall percentage to not reach 100% complete. The next run of
Garbage Collection will correct the completion display.
Replication
Host OS Any OS
Problem Replication cannot be disabled if the replication roles on source or target are removed
Symptom To disable replication completely, you first need to disable replication with the CLI
command set cluster replication before you remove the replication roles from the source and
target cluster. If replication cannot be disabled because all replication roles have already been
removed, first add the replication role to two nodes on the source and target clusters and then
disable replication.
Fix Summary
Host OS Any OS
Problem If a C-Clip is re-written without being changed and the C-Clip has triggered an EBR event or has
a litigation hold set, the C-Clip is replicated again
Symptom If a C-Clip is re-written to the cluster without being changed (no blobs added, no metadata
added or changed) and the C-Clip has triggered an EBR event or has a litigation hold, the C-Clip
is replicated again, although this is not necessary. Besides the extra replication traffic, there is no
impact.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1
Host OS Any OS
Problem The reported number of C-Clips to be replicated may be higher than the number actually
still due to be replicated
Symptom In certain situations the reported number of C-Clips to be replicated may be higher than
the number actually still due to be replicated. This is caused by organic self-healing cleaning up
redundant C-Clip fragments before replication has processed them; organic self-healing does
not update the number of C-Clips to be replicated when it is cleaning up.
Fix Summary
Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Host OS Any OS
Problem No apparent replication progress with CLI command show replication detail
Symptom When replication of deletes is enabled and many (100,000s) deletes are issued in a short time
period it appears as if replication is not progressing when monitored with the CLI show
replication detail command. Replication is, in fact, processing the deletes.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1
Host OS Any OS
Symptom When the Global Delete feature is enabled and C-Clips are deleted soon after they have been
written, the Replication Lag value may increase.
Fix Summary Workaround: disable Global Delete or increase the time between creating and deleting
the C-Clip
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Host OS Any OS
Problem Replication with failed authentication gives back wrong error message in some cases
Symptom When replication is started with a disabled anonymous profile, the SDK returns the error code
FP_OPERATION_NOT_ALLOWED (-10204) to the application and replication pauses with
paused_no_capability. When replication is started with a disabled user profile, the SDK returns
the error code FP_AUTHENTICATION_FAILED_ERR (-10153) and replication pauses with
paused_authentication_failed. This does not affect the operation of the application.
Fix Summary Consider both error messages as valid for this use case
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
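Per the fix summary, applications can simply accept either error code for this scenario. A hypothetical classifier (the two codes are those quoted in the symptom above; the helper itself is illustrative, not part of the SDK):

```python
# Error codes quoted in the symptom above.
FP_AUTHENTICATION_FAILED_ERR = -10153
FP_OPERATION_NOT_ALLOWED = -10204

def is_replication_auth_failure(code):
    """Treat both codes as a replication authentication/capability
    failure, per the fix summary above."""
    return code in (FP_AUTHENTICATION_FAILED_ERR, FP_OPERATION_NOT_ALLOWED)
```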
Host OS Any OS
Problem Replication does not pause when global delete is issued and target cluster does not have delete
capabilities granted
Symptom When the replication profile has no delete capability granted and a global delete is issued, the
deleted C-Clips go to the parking lot. Replication does not get paused.
Fix Summary An alert will be sent when the parking lot is almost full
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Server
Host OS Any OS
Symptom When a manually started GC run is aborted due to a non-uniform cluster version and the
auto-scheduling mode is enabled, the reason for the aborted run will report the auto-scheduling
mode instead of the manual mode.
Fix Summary
Host OS Any OS
Problem A delete of a C-Clip fails if its mirror copy is located on an offline node
Symptom An SDK delete fails when the mirror copy of a C-Clip resides on an offline node. The client will
receive error code -10156 (FP_TRANSACTION_FAILED_ERR) in this case.
Fix Summary
Host OS Any OS
Problem Incorrect error code when C-Clip is unavailable due to corrupted CPP blob
Symptom The SDK may return an incorrect error code (-10036, FP_BLOBIDMISMATCH_ERR) when a
CPP blob is unavailable because one fragment is corrupted and the disk holding another
fragment is offline. The correct error code is -10014, FP_FILE_NOT_STORED_ERR.
Fix Summary
Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
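For quick reference when diagnosing the issues above, the SDK error codes cited in these release notes map to symbolic names as follows (a convenience lookup containing only the codes that appear in this document; the helper is illustrative, not part of the SDK):

```python
# SDK error codes as cited in these release notes (not a complete list).
FP_ERROR_NAMES = {
    -10005: "FP_SERVER_ERR",
    -10014: "FP_FILE_NOT_STORED_ERR",
    -10036: "FP_BLOBIDMISMATCH_ERR",
    -10153: "FP_AUTHENTICATION_FAILED_ERR",
    -10156: "FP_TRANSACTION_FAILED_ERR",
    -10204: "FP_OPERATION_NOT_ALLOWED",
}

def error_name(code):
    """Map a numeric SDK error code to its symbolic name."""
    return FP_ERROR_NAMES.get(code, "UNKNOWN ({})".format(code))
```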
Configuration
Host OS Any OS
Symptom Importing a pool definition to a cluster in Basic mode that was exported from a cluster in GE or
CE+ mode will fail. Both clusters must run the same configuration to import a pool definition.
Furthermore, importing a pool definition to a cluster without the Advanced Retention
Management (ARM) feature that was exported from a cluster with the ARM feature will fail. Both
clusters must have the ARM feature.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1
Monitoring
Host OS Any OS
Symptom Nodes going on- and offline may fire duplicate alerts with symptom code 4.1.1.1.02.01. These
are all instances of the same problem, which EMC service has to follow up on.
Fix Summary
Host OS Any OS
Problem The audit log may have entries not related to an event
Symptom The audit log may contain entries such as 'Command {COMMAND} was executed ({result})'.
These messages do not relate to an actual event and can be ignored.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1
Host OS Any OS
Symptom Centera domain names are case sensitive. Management and presentation of domain names
may cause confusion since CV, the CLI, and Console are not consistently case sensitive.
Fix Summary When managing domains, always enter domain names using consistent case
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Host OS Any OS
Problem CLI command set notification needed when setting the cluster domain
Symptom A cluster domain entered with the CLI command set cluster notification on CentraStar version
3.0.2 and below is not saved when using Centera Viewer 3.1 or higher. Use the CLI command
set notification instead when setting the cluster domain.
Fix Summary
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1
Host OS Any OS
Symptom If, in Health Reports and Alerts, more than one recipient is specified with spaces between the
email addresses, none of the recipients may receive Health Reports or Alerts.
Fix Summary Remove any spaces in the list of recipients and use only commas to separate recipients
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
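Recipient lists can be normalized before they are entered so that addresses are separated by commas only. A small helper sketching the workaround above (the function name and splitting strategy are illustrative assumptions):

```python
def normalize_recipients(raw):
    """Rejoin a recipient list with commas and no spaces, as the
    workaround above requires. Accepts lists separated by commas,
    spaces, or any mix of the two."""
    parts = raw.replace(",", " ").split()  # split on commas and whitespace
    return ",".join(parts)
```

For example, `normalize_recipients("a@x.com, b@y.com  c@z.com")` yields a comma-only list safe to paste into the notification settings.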
Host OS Any OS
Symptom Although a "reply to" address for ConnectEMC can be configured by EMC Service in CentraStar
2.4 and 3.0, this address is currently not set in the email message header. Since CentraStar 3.1
it is possible to change the "from" address which will be used as the reply address for emails
sent by ConnectEMC.
Fix Summary
Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Host OS Any OS
Problem Nodes in Maintenance Mode will activate the node fault light
Symptom Nodes in Maintenance Mode will activate the node fault light. However, there is no fault and no
action is necessary.
Fix Summary Check whether the node is in Maintenance Mode by using the nodelist in Centera Viewer.
Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Security
Host OS Any OS
Problem When upgrading from CentraStar 3.1 to CentraStar 3.1.2 or higher, the anonymous profile may
be enabled again
Symptom When upgrading from a newly installed cluster running CentraStar 3.1 with anonymous disabled
to CentraStar 3.1.2 or higher, the anonymous profile may have been enabled during the
upgrade. This happens if the profile was never updated.
Fix Summary
Found in Version 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2,
3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Host OS Any OS
Symptom With only the hold capability enabled, setting or unsetting a litigation hold on a C-Clip fails with
an insufficient-capabilities error. Add the write capability to work around this issue.
Fix Summary
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Upgrades
Host OS Any OS
Problem No Health Reports being sent after an upgrade to CentraStar version 3.1
Symptom When upgrading from CentraStar version 3.0 or lower to CentraStar version 3.1, the From
address for ConnectEMC may not be set. As a result, no Health Reports will be sent if the
notification settings are updated after the upgrade without setting the From address. Make sure
you set the From address after the upgrade using the CLI command set notification.
Fix Summary Set the From address manually after the upgrade using CLI command set notification
Found in Version 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Host OS Any OS
Symptom When a node with the access role goes down or becomes unavailable, the cluster may in some
circumstances become unreachable. This can happen, for example, during an
upgrade.
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Host OS Any OS
Symptom Upgrading may cause read errors at the moment that one of the nodes with the access role is
upgraded. The read errors will disappear after the upgrade.
Fix Summary
Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Host OS Any OS
Symptom Upgrading your cluster to version 2.4 SP1 may cause client errors on the application that runs
against the cluster. The following circumstances increase this risk: 1) the cluster is heavily
loaded; 2) the application is deleting or purging C-Clips or blobs; 3) the application
runs on a version 1.2 SDK, especially when the number of retries
(FP_OPTION_RETRYCOUNT) or the time between retries (FP_OPTION_RETRYSLEEP) is
at or below the default values. With a retry count of 3, set the retry sleep to at
least 20 seconds.
Fix Summary
Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
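The guidance above (a retry count of 3 should pair with a retry sleep of at least 20 seconds) can be captured in a small validation helper. This is an illustrative sketch only — the helper name and table are not part of the SDK; the option names are those cited in the symptom above:

```python
# Recommended minimum sleep (seconds) between retries for a given retry
# count, per the note above: a retry count of 3 needs >= 20 s sleep.
MIN_RETRY_SLEEP_S = {3: 20}

def retry_settings_ok(retry_count, retry_sleep_s):
    """Check FP_OPTION_RETRYCOUNT / FP_OPTION_RETRYSLEEP-style values
    against the recommendation above. Counts without a documented
    minimum are accepted as-is."""
    return retry_sleep_s >= MIN_RETRY_SLEEP_S.get(retry_count, 0)
```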
Pools
Host OS Any OS
Problem Not all C-Clips are mapped to the pool specified after pool migration
Symptom Pool migration does not take into account regeneration self-healing activity. In limited cases it
may happen that a C-Clip is not mapped to the appropriate pool when a regeneration
self-healing task runs during the pool migration. The C-Clip then remains in the default pool.
Fix Summary
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Compatibility
Host OS Any OS
Symptom Profile C-Clips cannot be written to CentraStar version 3.1 or higher with an SDK version older
than 3.1 if the maximum retention period for the cluster is set to anything other than 'infinite'.
Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0,
4.0.0p1, 4.0.0p2, 4.0.1
Data Integrity
Host OS Any OS
Problem On nearly full clusters, it is not possible to delete a C-Clip because there is no room to write the
reflection.
Symptom When many embedded blobs are written to the same C-Clip, CentraStar may have an issue
parsing the C-Clip which in an extreme case could cause the node to reboot.
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Self Healing
Host OS Any OS
Symptom The regeneration buffer is by default set to 2 disks per cube or 1 disk per mirrorgroup per cube. If
the cluster is heavily loaded, this could cause the cluster to run out of space when a node goes
down. As a workaround, set the regeneration buffer to 2 disks per mirrorgroup per cube. Use the
CLI command set capacity regenerationbuffer to set the limit to 2 disks.
Fix Summary Set the regeneration buffer to 2 disks per mirrorgroup per cube.
Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Documentation
Host OS Any OS
Problem Access nodes may reboot when many connections are established at the same time
Symptom The rate at which new SDK clients connect to a cluster is limited to 5 per minute. Care should be
taken when multiple clients boot up and connect simultaneously. If an excessive number of
connections is established at the same time, the node with the access role may reboot.
Fix Summary Change your client start-up procedure to avoid establishing too many connections
at the same time
Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0, 4.0.0p1, 4.0.0p2, 4.0.1
Technical notes
The following table contains details of the currently shipping EMC
Centera Gen4 and Gen4LP hardware. Although other hardware
generations are supported, they are no longer shipped and so are not
presented here. For a list of all compatible EMC Centera hardware for
this release, go to E-Lab Navigator™ on the EMC Powerlink®
website.
Documentation
To download EMC Centera documentation, go to the EMC
Powerlink® website (registration required) and select Support >
Technical Documentation and Advisories > Hardware/Platforms
Documentation > Centera and expand CentraStar Operating
Systems in the left-hand menu.
To view specific documents, expand one of the following folders:
◆ General Reference
◆ Installation/Configuration
◆ Maintenance/Administration
◆ Technical Notes/Troubleshooting
◆ Release Notes
◆ White Papers
Refer to the EMC Centera Quick Start Guide for an overview of all
Centera documentation relevant for this release and where to find it.
Installation
◆ EMC delivers the EMC Centera software pre-installed on new
EMC Centera clusters; on existing clusters, an EMC certified
engineer performs the upgrade.
◆ The EMC Centera Quick Start Guide provides detailed
installation instructions for the CLI and Centera Viewer.
◆ For detailed SDK installation instructions, refer to the SDK release
notes.
For the most up-to-date listing of EMC product names, see EMC Corporation
Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.
The EMC Software Development Kit (SDK) contains the intellectual property
of EMC Corporation or is licensed to EMC Corporation from third parties. Use
of this SDK and the intellectual property contained therein is expressly limited
to the terms and conditions of the License Agreement.
The EMC version of Linux, used as the operating system on the EMC Centera
server, uses open source components. The licenses for those components are
found in the Open Source Licenses text file, a copy of which can be found on
the EMC Centera Customer CD.
SKINLF
Bouncy Castle
The Bouncy Castle Crypto package is Copyright © 2000 of The Legion Of The
Bouncy Castle (http://www.bouncycastle.org).
Copyright © 1991-2, RSA Data Security, Inc. Created 1991. All rights reserved.
License to copy and use this software is granted provided that it is identified
as the "RSA Data Security, Inc. MD5 Message-Digest Algorithm" in all
material mentioning or referencing this software or this function. RSA Data
Security, Inc. makes no representations concerning either the merchantability
of this software or the suitability of this software for any particular purpose.
It is provided "as is" without express or implied warranty of any kind.
These notices must be retained in any copies of any part of this documentation
and/or software.
ReiserFS
ReiserFS is hereby licensed under the GNU General Public License version 2.
Further licensing options are available for commercial and/or other interests
directly from Hans Reiser: hans@reiser.to. If you interpret the GPL as not
allowing those additional licensing options, you read it wrongly, and Richard
Stallman agrees with me, when carefully read you can see that those
restrictions on additional terms do not apply to the owner of the copyright,
Finally, nothing in this license shall be interpreted to allow you to fail to fairly
credit me, or to remove my credits, without my permission, unless you are an
end user not redistributing to others. If you have doubts about how to
properly do that, or about what is fair, ask. (Last I spoke with him Richard was
contemplating how best to address the fair crediting issue in the next GPL
version.)
MIT XML Parser software is included. This software includes Copyright (c)
2002,2003, Stefan Haustein, Oberhausen, Rhld., Germany