iDEN Call Procedure
Notice to Users
No part of this publication, or any software included with it, may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, including but not limited to photocopying, electronic, mechanical, recording, or otherwise, without the express prior written permission of the copyright holder. Motorola, Inc. provides this document AS IS without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and fitness for a particular purpose. Motorola reserves the right to make changes or improvements in the equipment, software, or specifications described in this document at any time without notice. These changes will be incorporated in new releases of this document.

Computer Software Copyrights
The Motorola and 3rd Party supplied Software (SW) products described in this instruction manual may include copyrighted Motorola and other 3rd Party supplied computer programs stored in semiconductor memories or other media. Laws in the United States and other countries preserve for Motorola and other 3rd Party SW suppliers certain exclusive rights for copyrighted computer programs, including the exclusive right to copy or reproduce in any form the copyrighted computer program. Accordingly, any copyrighted Motorola or other 3rd Party supplied SW computer programs contained in the Motorola products described in this instruction manual may not be copied, reverse engineered, or reproduced in any manner without the express prior written permission of Motorola or the 3rd Party SW supplier. Furthermore, the purchase of Motorola products shall not be deemed to grant, either directly or by implication, estoppel, or otherwise, any license under the copyrights, patents, or patent applications of Motorola or other 3rd Party supplied SW, except for the normal non-exclusive, royalty-free license to use that arises by operation of law in the sale of a product.

Use and Disclosure Restrictions
The software described in this document is the property of Motorola, Inc. It is furnished under a duly executed license agreement and may be used and/or disclosed only in accordance with the terms of that agreement. The software and documentation contained in this publication are copyrighted materials. Making unauthorized copies is prohibited by law. No part of the software or documentation may be reproduced, transmitted, transcribed, stored in a retrieval system, or translated into any language or computer language, in any form or by any means, without the express prior written permission of Motorola, Inc.

Trademarks
MOTOROLA, the Stylized M Logo, iDEN, and Message Mail are trademarks or registered trademarks of Motorola, Inc. in the United States and other countries. All other products or services mentioned in this document are identified by the trademarks or service marks of their respective companies or organizations, and Motorola, Inc. disclaims any responsibility for specifying their ownership. Any such marks are used in an editorial manner, to the benefit of the owner, with no intention of infringement.

While reasonable efforts have been made to assure the accuracy of this document, this document may contain technical or typographical errors or omissions. Motorola, Inc. and its subsidiaries and affiliates disclaim responsibility for any labor, materials, or costs incurred by any person or party as a result of using this document. Motorola, Inc. and its subsidiaries or affiliates shall not be liable for any damages (including, but not limited to, consequential, indirect, incidental, or special damages or loss of profits or data), even if they were foreseeable and Motorola has been informed of their potential occurrence, arising out of or in connection with this document or its use. Motorola, Inc. reserves the right to make changes without notice to any products or services described herein, and reserves the right to make changes from time to time in the content of this document and to substitute a new document therefor, with no obligation to notify any person or party of such changes or substitutions.
REV 12/15/06
Contact Information
Motorola, Inc.
Networks business
1501 Shure Dr.
Arlington Heights, IL 60004
U.S.A.
Table of Contents
List of Figures .......... ix
List of Tables .......... xiii
Chapter 1
iDEN Technology Foundation
Introduction .......... 1-2
iDEN System Introduction .......... 1-3
Subscriber Services .......... 1-4
iDEN System Technologies .......... 1-5
  Wireless Telephony .......... 1-5
  Two-way Dispatch Radio .......... 1-5
  Data .......... 1-6
Network Infrastructure .......... 1-7
  iDEN Infrastructure for the MSO .......... 1-8
Chapter 2
iDEN System Overview
Introduction .......... 2-3
  Call and Data Services for Subscribers and Users .......... 2-3
  Delivery Features for Service Providers .......... 2-4
  Network Management Features for Network Operators .......... 2-6
iDEN System Organization .......... 2-7
  Global .......... 2-7
  Region .......... 2-7
  Domain .......... 2-7
  Urbans .......... 2-8
  Service Area .......... 2-8
  Location Area .......... 2-8
  Cell .......... 2-9
Chapter 3
iDEN Subsystem Overview
Introduction .......... 3-4
Radio Access Network .......... 3-5
  Key Network Elements .......... 3-5
  Links and Protocols .......... 3-6
  Call Flow .......... 3-7
Interconnect Subsystem .......... 3-9
  Interconnect Architecture .......... 3-9
  Key Network Elements .......... 3-10
  Links and Protocols .......... 3-11
  Call Flow .......... 3-13
Dispatch Subsystem .......... 3-17
  Links and Protocols .......... 3-17
  Call Flow .......... 3-19
Packet Data Subsystem .......... 3-23
  Packet Data Subsystem .......... 3-23
  Key Network Elements .......... 3-24
  Links and Protocols .......... 3-25
  Call Flow .......... 3-26
Network Management Subsystem .......... 3-29
  Network Management Architecture .......... 3-29
  Key Network Elements .......... 3-30
  Databases and Applications .......... 3-31
  Links and Protocols .......... 3-32
  Network Management Flows .......... 3-33
Other Network Elements .......... 3-35
  iDEN Gateway (iGW) .......... 3-35
SR 16.0 05-Oct-07
Chapter 4
New Features in SR16.0
SR16.0 Feature Descriptions .......... 4-2
  iDEN Gateway Dispatch Roaming Registration .......... 4-2
  Private Call on 900 MHz (PC) .......... 4-2
  Selective Dynamic Group Call on Secondary Bands (SDGC) .......... 4-2
  Lifetime Audit Registration Renewal Required .......... 4-3
  OMC Password Policy Enhancements .......... 4-3
  5K Sites in an Urban in Full NGD, Legacy OMC and NGO Capacity .......... 4-3
  OMC Display & Process Enhancements .......... 4-4
  RMA (Retuning Management Application) .......... 4-4
  EBTS Re-rack with Dual Band RFDS .......... 4-4
  Global Positioning System Denial Response .......... 4-5
Feature Matrix .......... 4-7
Chapter 5
iDEN System Functional Description
Introduction .......... 5-3
Network Architecture .......... 5-4
  Green Field Systems .......... 5-4
  Legacy iDEN System Architecture .......... 5-5
Network Links and Protocols .......... 5-9
  iDEN Network Protocols .......... 5-10
  iDEN Protocol Documents .......... 5-16
  Network Links .......... 5-17
  Interface Protocols .......... 5-18
End-to-End Call Flows .......... 5-21
  Radio Access Call Flow .......... 5-21
  Interconnect Call Flow .......... 5-22
  Dispatch Call Flow .......... 5-24
  Packet Data Call Flow .......... 5-25
Functional Description v
Chapter 6
Radio Access Network Details
Purpose .......... 6-3
Radio Access Network Subsystem Description .......... 6-4
  Introduction .......... 6-4
  Radio Access Network Overview .......... 6-4
  Key Network Elements .......... 6-8
Enhanced Base Transceiver System .......... 6-10
  Overview .......... 6-10
  EBTS Hardware Description .......... 6-10
  EBTS Software Description .......... 6-19
  EBTS Network Management .......... 6-19
Chapter 7
Interconnect Subsystem Details
Purpose .......... 7-4
Interconnect Subsystem Description .......... 7-5
  Introduction .......... 7-5
  Interconnect Overview .......... 7-5
iBSC Functional Description .......... 7-9
  iBSC Overview .......... 7-9
  iBSC Hardware Description .......... 7-12
  iBSC Software Description .......... 7-22
  iBSC Network Management .......... 7-23
Mobile Switching Center Functional Description .......... 7-26
  MSC Functionality .......... 7-26
  MSC Hardware Description .......... 7-28
  MSC Software Description .......... 7-30
  MSC Network Management .......... 7-30
Mobile Location Center Functional Description .......... 7-31
  MLC Hardware Description .......... 7-31
  MLC Software Description .......... 7-31
  MLC Network Management .......... 7-32
Interworking Function (IWF) .......... 7-33
  Introduction .......... 7-33
  IWF Overview .......... 7-33
  IWF Hardware Description .......... 7-34
  IWF Software Description .......... 7-36
  IWF Network Management .......... 7-36
SuperNode Data Manager/Fault Tolerant (SDM/FT) .......... 7-37
  Introduction .......... 7-37
  SDM/FT Overview .......... 7-37
  SDM/FT Hardware Description .......... 7-38
  SDM/FT Software Description .......... 7-39
  SDM/FT Network Management .......... 7-39
Chapter 8
Dispatch Subsystem Details
Purpose .......... 8-6
Next Generation for Dispatch .......... 8-7
  Final NGD Implementation .......... 8-8
DAP Functional Description .......... 8-10
  Dispatch Subscriber Parameters .......... 8-11
  DAP Feature .......... 8-16
  DAP Enhancements .......... 8-16
  Core Architectural Models .......... 8-18
HA-DAP Functional Description .......... 8-22
  HA-DAP Feature .......... 8-26
  HA-DAP NGD Enhancements .......... 8-28
  Database Maintenance .......... 8-31
  HA Services Layer .......... 8-32
iVPU Functional Description .......... 8-34
  Software Description .......... 8-39
  iVPU Feature .......... 8-39
  iVPU Enhancements .......... 8-39
  Rolling Software Upgrade .......... 8-42
HA-iHLR Functional Description .......... 8-44
  HA-iHLR Features .......... 8-48
  HA-iHLR NGD Enhancements .......... 8-49
  HA-iHLR Database ASCII Dump .......... 8-50
iDAC Functional Description .......... 8-53
APD Functional Description .......... 8-58
iSG Functional Description .......... 8-62
MPS Functional Description .......... 8-68
Chapter 9
Packet Data Subsystem Details
Purpose .......... 9-3
Packet Data Subsystem Description .......... 9-4
  Introduction .......... 9-4
  Packet Data Overview .......... 9-4
Mobile Data Gateway 2 .......... 9-6
  MDG2 Overview .......... 9-6
  MDG2 Hardware Description .......... 9-7
  MDG2 Software Description .......... 9-11
  MDG2 Network Management .......... 9-11
Mobile Data Gateway 4 .......... 9-14
  MDG4 Overview .......... 9-14
  MDG4 Hardware Description .......... 9-16
  MDG4 Software Description .......... 9-19
  MDG4 Network Management .......... 9-20
Billing Accumulator .......... 9-24
  BA Overview .......... 9-24
  BA Hardware Description .......... 9-25
  BA Software Description .......... 9-28
  BA Network Management .......... 9-29
Chapter 10
Network Management Subsystem Details
Purpose .......... 10-3
OMC-R Functional Description .......... 10-4
NGO Functional Description .......... 10-13
EGT4 Functional Description .......... 10-20
Cisco WAN Manager Functional Description .......... 10-23
Chapter 11
Other Network Elements
Purpose .......... 11-3
iGW Functional Description .......... 11-4
NDM Functional Description .......... 11-10
  DM .......... 11-14
  ASP EM .......... 11-16
  NDM PWI .......... 11-16
  NDM BU Admin Web Interface .......... 11-16
List of Figures
Figure 1-1   Simplified SR16.0 Components Diagram .......... 1-8
Figure 2-1   Initial SU System Contact .......... 2-4
Figure 2-2   iDEN Network Hierarchical Structure .......... 2-9
Figure 3-1   Radio Access Network Architecture .......... 3-5
Figure 3-2   Call Flow with Radio Access Network .......... 3-7
Figure 3-3   SR16.0 Interconnect Architecture .......... 3-9
Figure 3-4   Interconnect Resource Request Process View (Steps 1-8) .......... 3-14
Figure 3-5   Interconnect Call Setup Process View (Steps 9-15) .......... 3-14
Figure 3-6   Interconnect Connecting Voice Process View (Steps 16-19) .......... 3-15
Figure 3-7   Dispatch Group Call Resource Request (Steps 1-4) .......... 3-20
Figure 3-8   Dispatch Group Call Paging (Steps 5-9) .......... 3-20
Figure 3-9   Dispatch Group Call Set up (Step 10) .......... 3-20
Figure 3-10  Dispatch Group Call Connect Voice (Steps 11-13) .......... 3-21
Figure 3-11  SR16.0 Packet Data Architecture .......... 3-23
Figure 3-12  Packet Data Call Flow Registration (Steps 1-8) .......... 3-27
Figure 3-13  Network Management (NGO) Architecture .......... 3-29
Figure 3-14  iCP Status Information Flow (Collection) .......... 3-33
Figure 3-15  iCP Status Information Flow (Report) .......... 3-34
Figure 3-16  iGW Network Flow .......... 3-36
Figure 3-17  SIP to S2D Call Flow (Steps 1-8) .......... 3-39
Figure 4-1   Single EBTS Rack .......... 4-5
Figure 5-1   iDEN System Architecture .......... 5-4
Figure 5-2   Legacy iDEN System Architecture .......... 5-5
Figure 5-3   Network Protocol and Links Diagram .......... 5-11
Figure 5-4   Legacy Network Protocol and Links Diagram .......... 5-14
Figure 5-5   Radio Access Call Flow .......... 5-22
Figure 5-6   Mobile Originated Interconnect Call Setup Step Diagram .......... 5-24
Figure 5-7   Typical Dispatch Call Summary .......... 5-26
Figure 5-8   Packet Data Call Flow .......... 5-28
Figure 6-1   Radio Access Network Architecture .......... 6-5
Figure 6-2   NGD Configuration .......... 6-6
Figure 6-3   NGD Mixed Mode Configuration .......... 6-7
Figure 6-4   EBTS Cabinets .......... 6-11
Figure 6-5   Radio Frequency Cabinet .......... 6-12
Figure 6-6   iSC3 Front Panel .......... 6-13
Figure 6-7   iSC2 - Front and Rear Views .......... 6-15
Figure 6-8   Duplex RFDS .......... 6-17
Figure 6-9   Cavity Combining RFDS .......... 6-17
Figure 7-1   NGD System .......... 7-6
Figure 7-2   Mixed Mode Configuration .......... 7-7
Figure 7-3   Split Mode Configuration .......... 7-8
Figure 7-4   iBSC in the iDEN Network .......... 7-10
Figure 7-5   iBSC Architecture .......... 7-11
Figure 7-6   iBSC Coverage .......... 7-11
Figure 7-7   iBSC Hardware .......... 7-13
Figure 7-8   iCP Frame Layout .......... 7-14
Figure 7-9   iCP I/O Shelf .......... 7-15
Figure 7-10  RSL Configurations .......... 7-17
Figure 7-11  iVPU Components .......... 7-18
Figure 7-12  iVPU Front Cage .......... 7-19
Figure 7-13  iVPU Rear Cage .......... 7-20
Figure 7-14  iBSC Links and Interfaces .......... 7-22
Figure 7-15  MSC in the Interconnect Subsystem .......... 7-27
Figure 7-16  MSC Components .......... 7-28
Figure 7-17  Interworking Function Services .......... 7-34
Figure 7-18  IWF Front View .......... 7-35
Figure 7-19  IWF Back View .......... 7-35
Figure 8-1   Mixed Mode Stage One .......... 8-7
Figure 8-2   Mixed Mode Stage Two .......... 8-8
Figure 8-3   Next Generation Dispatch Network .......... 8-8
Figure 8-4   R10K DAP Cabinet .......... 8-14
Figure 8-5   R12K DAP Cabinet .......... 8-15
Figure 8-6   HA-DAP Cabinet - Front and Rear View .......... 8-24
Figure 8-7   RX7620 Server - Front Panel .......... 8-25
Figure 8-8   RX7620 Server - Back Panel .......... 8-26
Figure 8-9   iVPU Front and Rear Cabinet .......... 8-37
Figure 8-10  HA-iHLR Front and Rear Cabinet .......... 8-47
Figure 8-11  iDEN Dispatch Access Controller - Front View .......... 8-55
Figure 8-12  iDEN Dispatch Access Controller - Rear View .......... 8-56
Figure 8-13  Advanced Packet Duplicator - Front View .......... 8-60
Figure 8-14  iSG 1 and iSG 2 Cabinet .......... 8-65
Figure 8-15  MPS Cabinet Lineup .......... 8-71
Figure 9-1   NGD System .......... 9-5
Figure 9-2   MDG2 - Front View .......... 9-8
Figure 9-3   Actual MDG2 - Front View .......... 9-10
Figure 9-4   MDG4 in the iDEN Network .......... 9-15
Figure 9-5   MDG4 Board Layout .......... 9-18
Figure 9-6   MDG4 - Actual Front View .......... 9-19
Figure 9-7   MDG4 User Interface Menu .......... 9-21
Figure 9-8   Billing Accumulator Mirror Function .......... 9-25
Figure 9-9   Billing Accumulator - Front View .......... 9-26
Figure 9-10  Back Panel of the Billing Accumulator .......... 9-27
Figure 10-1  Sun Fire 4800 - Front View .......... 10-6
Figure 10-2  Sun Fire 4800 - Rear View .......... 10-7
Figure 10-3  Enterprise 3500 - Front View .......... 10-8
Figure 10-4  Enterprise 3500 - Rear View .......... 10-9
Figure 10-5  Netra 440 - Front View .......... 10-15
Figure 10-6  Netra 440 - Rear View .......... 10-15
Figure 11-1  iGW in the iDEN Network .......... 11-5
Figure 11-2  iGW Cabinet Configuration .......... 11-7
Figure 11-3  NDM Connectivity in the iDEN Network .......... 11-11
Figure 11-4  NDM Cabinet - Front and Rear .......... 11-13
List of Tables
Table 1    Readers Objectives .......... xvii
Table 2    Related Manuals .......... xix
Table 3    Recommended iDEN Training .......... xxii
Table 4    Disclosure Table .......... xxvii
Table 5    Revision History .......... xxviii
Table 6    Acknowledgements .......... xxix
Table 5-1  Key to iDEN System Components Diagrams .......... 5-5
Table 5-2  Key to Network Protocols and Links Diagram .......... 5-12
Table 5-3  Key to Legacy Network Protocols and Links Diagram .......... 5-15
Table 5-4  iDEN Protocol Documentation .......... 5-17
Table 7-1  iCP Board Functions .......... 7-16
Table 7-2  Split Mode RSL Card Configuration .......... 7-17
Table 7-3  iVPU Board Functions .......... 7-20
Table 8-1  Mixed Mode and Final NGD Architecture NEs .......... 8-9
Table 8-2  Statistics Changes for BCCB 862 .......... 8-17
Table 8-3  Parameters Added for BCCB 1020 .......... 8-30
Table 8-4  Statistics Added for BCCB 1020 .......... 8-30
Table 8-5  iVPU User Account Commands .......... 8-42
List of Tables
SR 16.0 05-Oct-07
Chapter | Page(s) | Type of Change
About this manual | xix, xxv | Updated Related Manuals section
Chapter 1 - iDEN Technology Foundation | 1-8, 1-9 | Added CMM information; Updated figure; Replaced: Next Generation With: High Availability; Inserted: Application Processor; Replaced: NGD With: HA-DAP
Chapter 2 - iDEN System Overview | 2-4, 2-6 | Inserted: iHLR; Replaced: Internetworking Operating System With: Intelligent Optimization system
Chapter 3 - iDEN Subsystem Overview | 3-6, 3-7, 3-29 | Replaced: DSA With: DLA; Updated figures; Inserted: Control Cabinet; Inserted: . The RFDS is part ...; Inserted: . The IMU and EAS ...; Inserted: . The iSC is part ...; Inserted: . The BR is part ...; Deleted: DCS to/from the ...
Chapter 4 - New Features in SR 16.0 and Chapter 5 - iDEN System Functional Description | All; 5-4, 5-5; 5-11, 5-14; 5-18 | Deleted: , split backhaul, or ...; Deleted: The NGD architecture supports various ...; Replaced: Mixed Mode configuration; Deleted: AFR is supported in both ...; Updated figure
Chapter 9 - Packet Data Subsystem Details | 9-8, 9-10, 9-15, 9-18, 9-19, 9-21, 9-25 thru 9-27 | Updated figures
Chapter 10 - Network Management Subsystem Details | 10-6 thru 10-9, 10-13, 10-15, 10-18 | Updated figures; Replaced: two With: three; Inserted: Displays input fields for the; Inserted: shelf, site; Replaced: hub With: BSC port number,; Replaced: card display fields are added With: BSC time slot number; Deleted: EBTS screen The X...; Replaced: Site With: BSC; Replaced: The OMC-R With: Displays; Replaced: address is removed With: address, hub; Replaced: all sites on a BSC ... With: card fields
The intended content of the manual
How the manual is to be used
Other reference material that supports information in this manual
Elements in the appearance of the manual and their meaning
General safety guidelines
The changes to the manual over time
Who helped produce and validate information in the manual
Purpose
The primary purpose of the iDEN System Overview and Functional Description is to provide general technical background about the iDEN system, key subsystems, and network elements. The information in the iDEN System Overview and Functional Description will expose users to the purpose and parts of the iDEN system and prepare them to locate and use additional technical information.
Table 1 Readers Objectives

Chapter | Readers Goals
Chapter 1 iDEN Technology Foundation | Basic Knowledge
Chapter 2 iDEN System Overview | Basic Knowledge
Chapter 3 iDEN Subsystem Overview | Basic Knowledge
Chapter 4 New Features in SR16.0 | Basic Knowledge
Chapter 5 iDEN System Functional Description | Advanced Knowledge
Chapter 6 Radio Access Network Details | Advanced Knowledge
Chapter 7 Interconnect Subsystem Details | Advanced Knowledge
Chapter 8 Dispatch Subsystem Details | Advanced Knowledge
Chapter 9 Packet Data Subsystem Details | Advanced Knowledge
Chapter 10 Network Management Subsystem Details | Advanced Knowledge
Chapter 11 Other Network Elements | Advanced Knowledge
Audience Profile
The iDEN System Overview and Functional Description is intended for iDEN service providers and iDEN network operators, two key roles associated with an iDEN system.
Service Providers are those interested in the iDEN system from a business perspective. This audience is responsible for seeing that the system generates revenue and meets regulations and service level agreements by delivering calling services to subscribers and users.
Network Operators are those involved in the front-line of an iDEN system, e.g., implementation or support. This audience is the technical entity responsible for installing and operating the network infrastructure and links by which the services are provided to subscribers and users.
Procedures for calling the CNRC
Classification of trouble tickets
The escalation processes
The CNRC can be contacted at the following telephone numbers:
United States and Canada: 1-800-499-6477
International: 1+847-704-9800
Note: Toll-free international access codes are available for many locations. Please refer to Appendix E of the Customer Guide to iDEN Customer Network Resolution Center (WP2000-003) for a list of these access codes and dialing instructions.
Related Manuals
The following publications may be required to supplement the information contained in this manual. These references are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com.

Table 2 Related Manuals

Document Name | Part Number
OMC-R System Overview | 68P81001Y90
OMC-R Configuration Management Users Guide | 68P81001Y91
OMC-R System Administrator Guide | 68P81001Y93
OMC-R Performance Management | 68P81001Y95
OMC-R Online Change Configuration Users Guide | 68P81001Y97
OMC-R Configuration Management Parameters | 68P81001Y92
OMC-R Event/Alarm Management | 68P81001Y94
MSC Planning and Expansion Guide | 68P80801A10
iDEN Packet Data Service Registration And Call Processes | 68P80801C90
EGT4 Alarm Documentation | 68P81001Y54
EGT4 Users Manual | 68P81001Y53
iDEN System Technical Overview | 68P81095E55
Guide to Motorola Acronyms and Terms | 68P81131E90
EBTS Volume 1 System Installation and Testing | 68P80801E35
EBTS Volume 2 Base Radios | 68P80801E35
EBTS Volume 3 RF Distribution Systems | 68P80801E35
iDEN Integrated Site Controller System Manual | 68P81098E05
iDEN Gen 3 Site Controller System Manual | 68P80801E30
Ethernet Analyzer Detective Software User's Manual | 68P81001Y55
Interconnect Troubleshooting Guide | 6881012Y80
iBSC Hardware Manual | 68P81001Y77
BSC Troubleshooting Guide | 68P80800H75
BSSC3 Hardware Change Manual | 6880802F80
Table 2 Related Manuals (continued)

Part Number
6880802F05, 68P81006Y39, 68P80801A10, 68P80801A20, 68P80800E10, 68P80802B05, 68P80800B95, 68P80801C90, 68P80802D30, 6881012Y82, 68P81001Y76, 6881001Y49, 6881001Y50, 68P81001Y83, 6881012Y17, 68P80802A85, WP04003o, WP2002-025, WP2003-009, WP05008B, WP2003-014, WP2001-010, WP2002-026, WP04008, WP2002-021, WP2000-002, WP2002-023, WP2003-019, WP2003-020, WP2002-022
Recommended Training
To further enhance your understanding of the iDEN system, it is recommended that you complete the following Motorola-offered training courses.

Table 3 Recommended iDEN Training

Description | Course Number
iDEN Overview | GEN101
iDEN System School | GEN102
iDEN System Overview | GEN103
OMC Fundamentals Course | FND106
An Introduction to System Release 16.0 | SRL316
OMC System Administration Course | OPM202
iDEN System Recovery Topics Course | ADV401
iBSC Operations & Maintenance | OPM214
Training Access
To access the iDEN Technical Training Course Catalog on MyNetworkSupport, follow the steps below.

To Access the iDEN Technical Training Course Catalog

Step | Action
1 | Log in to MyNetworkSupport at: https://mynetworksupport.motorola.com/ The MyQuickLinks screen appears.
2 | Scroll down to the section for iDEN technology.
3 | On the right side of the page, click on iDEN Training Catalog.
Manuals On-line
This manual is available on the World Wide Web at mynetworksupport, the iDEN customer site. This site was created to provide secure access to critical iDEN Infrastructure information. This web site features a library of iDEN Infrastructure technical documentation such as bulletins, system release documents and product manuals. The documents are located on the secured extranet website at the URL:
https://mynetworksupport.motorola.com
Be sure to include your name, fax or phone number, the complete manual title and part number, the page number where the error is located, and any comments you may have regarding what you have found. Thank you for your time. We appreciate any comments from the users of our manuals.
Conventions
Software
submenu commands: Table > Table Designer
new terms: mobile station
keystrokes: Ctrl+Alt+Delete, Return
mouse clicks: click, double-click
user input: Type delete
screen output: DAP is starting....
Hardware
CD-ROM
Safety
This manual may contain safety notices (alerts). Alerts are based on the standards that apply to graphics on Motorola equipment. Specific procedural notices are stated in the procedures as required and have specific visual representations. The representations are:
DANGER
INDICATES AN IMMINENTLY HAZARDOUS SITUATION WHICH, IF NOT AVOIDED, WILL RESULT IN DEATH OR SERIOUS INJURY.
WARNING
Indicates a potentially hazardous situation which, if not avoided, could result in death or serious injury.
CAUTION
Indicates a potentially hazardous situation which, if not avoided, could result in minor or moderate injury.
CAUTION
Without the alert symbol indicates a potentially hazardous situation which, if not avoided, may result in property damage.
Important
Indicates essential information that is indispensable to the topic.

Note
Indicates something of notable worth or consequence.
General Safety
Important: Remember, safety depends on you! General safety precautions must be observed during all phases of operation, service, and repair of the equipment described in this manual. Failure to comply with these precautions or with specific warnings elsewhere in this manual violates safety standards of design, manufacture, and intended use of the equipment. You must heed the safety precautions and warnings listed in the product manuals for your equipment. Any individual using or maintaining the product(s) should follow these warnings and all other safety precautions necessary for the safe operation of the equipment in your operating environment. Motorola, Inc. assumes no liability for failure to comply with these requirements.
Logo 1 means that the product contains no substances in excess of the maximum concentration value for materials identified in the China Management Methods regulation. Logo 2 means that the product may contain substances in excess of the maximum concentration value for materials identified in the China Management Methods regulation, and has an Environmental Friendly Use Period (EFUP) in years, fifty years in the example shown.
The Environmental Friendly Use Period (EFUP) is the period (in years) during which the Toxic and Hazardous Substances (T&HS) contained in the Electronic Information Product (EIP) will not leak or mutate causing environmental pollution or bodily injury from the use of the EIP. The EFUP indicated by the Logo 2 label applies to a product and all its parts. Certain field-replaceable parts, such as battery modules, can have a different EFUP and are marked separately. The Disclosure Table (shown on the following page) is intended only to communicate compliance with China requirements; it is not intended to communicate compliance with EU RoHS or any other environmental requirements.
Table 4 Disclosure Table
Revision History
The following table contains a list of the revisions made to this guide.

Table 5 Revision History

Date | Issue | Description of Changes
02/26/06 | O | Original Publication for SR15.0
10/04/07 | A | Updated for SR16.0
Acknowledgements
We recognize and thank the following contributors.

Table 6 Acknowledgements

Ann Kohut
Mark Interrante
Elayne Tillman
Todd Mueller
Carlos Carrillo
Sheila Raines
Gabriel Bovino
John Mabbott
Patti Rausch
Steve Hart
Gil Scovill
Thomas Logsdon
Bryan Blanco
Chapter 1
iDEN Technology Foundation
In This Chapter Topic See Page
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2 iDEN System Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3 Subscriber Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4 iDEN System Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5 Wireless Telephony . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5 Two-way Dispatch Radio . . . . . . . . . . . . . . . . . . . . . . . . . 1-5 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-6 Network Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7 iDEN Infrastructure for the MSO . . . . . . . . . . . . . . . . . . . 1-8
Interconnect ................................................................................... 1-8 Dispatch ......................................................................................... 1-9 Packet Data.................................................................................... 1-9 Network Management .................................................................. 1-10
Introduction
The rapid growth of the wireless communications market has produced vast opportunities to enhance and improve communications between individuals. Because of this growth, the need to optimize Radio Frequency (RF) spectral resources and provide ever-increasing services is essential. To support the increase in wireless services while making the best use of available resources, the iDEN system capitalizes on the fact that communications can be:
Half-duplex: where one user is transmitting (talking) and other users are receiving (listening)
Full-duplex: where there is an open bi-directional link that allows full two-way communication
Many times communication does not require a full-duplex link. Messaging, paging, some forms of data communication, and structured voice communication are, or can effectively operate in, half-duplex mode. Traditional telephone conversations and more intensive data links require the ability to interrupt; that requires full-duplex operation. By utilizing both full- and half-duplex operations, iDEN allows much of the voice traffic to run in half-duplex mode, while providing full-duplex functionality when required. As part of the ongoing effort to support the growth in wireless communication, the iDEN system is an integration of traditional Push-To-Talk (PTT), half-duplex, analog radio technology and feature-rich, full-duplex digital cellular communications. This integration of mobile communication technologies provides state-of-the-art functions and benefits to mobile users while optimizing the available infrastructure resources.
At the highest level, the iDEN system is the infrastructure that is used to deliver services to mobile subscribers using a single subscriber unit (SU). This subsection briefly describes the subscriber/user services available from the iDEN system and the network infrastructure that delivers these services.
Note: The SU is also sometimes referred to as a Mobile Station. For clarity, this manual uses the term SU, not Mobile Station (MS).
Subscriber Services
The iDEN system provides services that include: two-way radio communication, cellular/wireless telephone communication, and wireless data services. The iDEN network provides the customer with a complete solution to all of their wireless needs, using a single SU.
The iDEN network converts analog voice to digital code for transmission over the air. iDEN uses TDMA technology which separates RF channels into time slots, allowing multiple users to share each RF channel by transmitting at different times. The iDEN SU continually measures signal quality to maintain strong signal reception.
Wireless Telephony
To convert an analog voice signal to a digital signal, the wave pattern is sampled at different points and a digital representation of 1s and 0s is given to the sample at each point. This digital representation can be stored, replicated, or transmitted without further degradation. The SU performs the conversion of analog voice into digital code using two voice encoding, or vocoding, techniques called VSELP and AMBE++. These techniques also compress large segments of voice into smaller segments for more efficient transmission of the voice data. The iDEN system uses Time Division Multiple Access (TDMA). TDMA divides each 25-kilohertz radio frequency (RF) channel into 6 time slots, allowing 6 communication paths on a single RF channel. Multiple users are assigned the same channel frequency and share the RF channel by transmitting at different times. The sharing of RF channels enables TDMA technology to be more efficient in its utilization of resources. iDEN maintains strong signal reception by continually measuring the signal quality of the serving cell and all of the surrounding neighbor cells. If the signal quality drops below an acceptable level, the SU moves to a site that offers a better signal and is assigned a new RF channel and time slot for transmission.
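The time-slot sharing described above can be sketched in code. This is an illustrative sketch only, not from the manual: the 15 ms slot duration and the SU names are assumptions made for the example; only the six-slots-per-channel figure comes from the text.

```python
# Sketch of TDMA slot sharing: six users share one 25 kHz RF
# channel by each transmitting in a different time slot of a
# repeating frame. SLOT_MS is an assumed value for illustration.

SLOTS_PER_FRAME = 6   # six communication paths per RF channel (from the text)
SLOT_MS = 15          # assumed slot duration in milliseconds

def slot_schedule(users, num_frames=1):
    """Return (start_time_ms, user) pairs showing who transmits when."""
    schedule = []
    for frame in range(num_frames):
        for slot, user in enumerate(users[:SLOTS_PER_FRAME]):
            start_ms = (frame * SLOTS_PER_FRAME + slot) * SLOT_MS
            schedule.append((start_ms, user))
    return schedule

users = [f"SU-{n}" for n in range(1, 7)]
for start, user in slot_schedule(users):
    print(f"t={start:3d} ms  {user} transmits")
```

Because each user owns a fixed slot position in every frame, all six can hold simultaneous conversations on the same frequency without colliding.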
Data
There are three categories of data services supported by the iDEN network. They are as follows:
Short Message Service (SMS): This service is commonly referred to as text messaging.
Circuit Data: Circuit data enables the subscriber to send and receive faxes and email, connect to online services such as a web service provider, and access databases.
Packet Data: Packet Data provides the subscriber access to Intranet and Internet applications using standard Internet Protocols.
Network Infrastructure
The network infrastructure of the iDEN system offers a variety of services and features that provide subscribers with the latest integrated wireless communications solutions. These services and features are divided into three categories:
Interconnect: The Interconnect subsystem allows a SU to travel freely throughout an iDEN system and communicate with other subscribers and land line systems. Within this extended network, a user can originate or receive Interconnect calls, and if so provisioned, access data and voice mail services.
Dispatch: Dispatch calls are voice communications that use the half-duplex Push-To-Talk (PTT) form of communication. This type of call does not use other provider networks except as transport facilities.
Packet Data: Packet Data in iDEN is a non-voice communication interface that connects the SU directly with the Intranet, Internet, Virtual Private Network (VPN), and the Extranet. Packet Data networking allows the service provider to become a point-of-presence for mobile users on the Internet.
Note
The EBTS is not part of the MSO. See iDEN Subsystem Overview on page 3-1 for more information on all iDEN subsystems.
Interconnect
The Interconnect subsystem allows a SU to travel freely throughout an iDEN system and communicate with other subscribers and land line systems. Within this extended network, a user can originate or receive Interconnect calls, and if so provisioned, access data and voice mail services. The major components of the Interconnect subsystem are as follows:
iDEN Base Site Controller (iBSC): routes Interconnect packets between the EBTS and the MSC.
Mobile Switching Center (MSC): determines Interconnect services (iHLR) and SU location information (VLR), and also controls/routes Interconnect calls within the network and to other service providers.
Short Message Service (SMS): provides text message services.
Voice Mail Server (VMS): serves as an answering machine and stores messages for the SU. The VMS also indicates when voice mail is present.
Supernode Data Manager-Fault Tolerant (SDM/FT): allows law enforcement agencies to obtain call data records as well as intercept audio in an Interconnect phone call for court-authorized monitoring. This is part of the Call Intercept System (CIS).
Dispatch
Dispatch calls are voice communications that use the half-duplex Push-ToTalk (PTT) form of communication. This type of call does not use other provider networks except as transport facilities. The major components of the Dispatch subsystem are as follows:
High Availability Dispatch Application Processor (HA-DAP): The HA-DAP is responsible for the overall control and coordination of Dispatch and Packet Data services.
iDEN Home Location Register (HA-iHLR): The HA-iHLR is the database for Dispatch and Packet Data services.
Network Dispatch Messenger (NDM) (optional): The NDM allows a direct connection between your PC and the iDEN phone.
Enhanced Global Title Translation Table Tool (EGT4): The EGT4 is a centrally located workstation. It provides the following information to all HA-DAPs and HA-iHLRs in the network:
International Mobile Station Identifier (IMSI) ranges associated with a particular HA-iHLR
IP addresses of all HA-iHLRs and D-VLRs
iDEN Surveillance Gateway (iSG): The iSG provides law enforcement surveillance capability for the iDEN Dispatch and Packet Data subsystems.
Packet Data
Packet Data (PD) in iDEN is a non-voice communication interface that connects the SU directly with the Intranet, Internet, Virtual Private Network (VPN), and the Extranet. Packet Data networking allows the service provider to become a point-of-presence for mobile users on the Internet. The major components of the Packet Data subsystem are as follows:
Mobile Data Gateway (MDG): The MDG is the interface to the Internet. It converts the iDEN protocol to the Internet Protocol and vice versa. The MDG also functions as the Foreign Agent (FA).
Home Agent (HA): The HA routes packets from the Internet to the correct MDG for delivery to the SU.
Billing Accumulator (BA): The main functions of the BA are as follows:
Collect and store billing records that are produced by the MDG.
Provide an avenue for the stored billing records to be transferred to an outside billing center.
There are two BAs in the Packet Data system: one primary and one secondary.
Authentication, Authorization, and Accounting (AAA) Server: The AAA Server is a carrier-provided network element that is used for Packet Data authentication. The job of the AAA Server is to provide the following:
Subscriber authentication: verifies and identifies the SU.
Subscriber authorization: SU level of service (what it can and cannot do).
Subscriber accounting: tracks services, session times, etc.
Billing Accumulator (BA): collects Packet Data billing information.
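The division of labor between the HA and the MDG can be pictured with a small sketch. This is not from the manual; the class, names, and addresses are hypothetical, and it illustrates only the routing idea: the HA tracks which MDG currently serves each SU and forwards Internet traffic toward it.

```python
# Hypothetical sketch of the Home Agent routing idea: the HA keeps
# a registry of which MDG (acting as Foreign Agent) currently serves
# each SU and forwards incoming packets to that MDG.

class HomeAgent:
    def __init__(self):
        self.registry = {}  # SU id -> address of the serving MDG

    def register(self, su_id, mdg_addr):
        """Record the MDG through which an SU registered for Packet Data."""
        self.registry[su_id] = mdg_addr

    def route(self, su_id, packet):
        """Forward an Internet packet toward the SU's current MDG."""
        mdg = self.registry.get(su_id)
        if mdg is None:
            return None          # SU not registered for Packet Data
        return (mdg, packet)     # deliver the packet via that MDG

ha = HomeAgent()
ha.register("su-1001", "mdg-east")
print(ha.route("su-1001", b"data"))  # ('mdg-east', b'data')
```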
Network Management
The Next Generation OMC-R (NGO) is the network management subsystem that establishes and maintains the network, collects information about it, and presents that information to the system operator. This data is used primarily to support the daily operation of the network radio system elements and to provide the system operator with valid information for future planning decisions. The major components of the NGO subsystem are as follows:
OMC-R: establishes, maintains, and collects information about the network, and presents it to the system operator.
Backup Server: provides automated tape backups for the urban. Only one backup server is supported per urban.
E0 LAN: supports tape backup activities.
Chapter 2
iDEN System Overview
In This Chapter Topic See Page
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3 Call and Data Services for Subscribers and Users . . . . 2-3 Delivery Features for Service Providers. . . . . . . . . . . . . 2-4
Over-the-Air-Programming ............................................................. 2-4 Initial Registration........................................................................... 2-4 Call Monitoring/Surveillance........................................................... 2-4 Billing.............................................................................................. 2-5 Horizontal Networking .................................................................... 2-5 Subscriber Databases .................................................................... 2-5 Interfaces for Customer-provided Administrative Data Platforms .. 2-5 Encryption Support for Dispatch..................................................... 2-5
iDEN System Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7 Global. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7 Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7 Urbans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8 Service Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8 Location Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-8 Cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-9
Introduction
This chapter introduces the features and organization of the iDEN system from the viewpoints of service providers and network operators.
Subscriber/User: The subscribers and users of the iDEN system are those who utilize the calling services.
Service Provider: The service providers of the iDEN system are the business entities responsible for generating revenue and meeting regulations and service level agreements by delivering calling services to subscribers and users.
Network Operator: The network operators of the iDEN system are the technical entities responsible for installing and operating the network infrastructure and links by which the services are provided to subscribers and users.
Interconnect provides basic mobile-to-mobile, land-to-mobile, and mobile-to-land calling, as well as supplementary and adjunct services such as call forwarding, call waiting, voice mail, and caller ID.
Dispatch provides private call, talk group call, horizontal networking, and data services such as SMS and dial-up laptop access.
Packet Data (including WiDEN) provides mobile IP services such as Internet-enabled phones.
Initial Registration
Before an SU can obtain service, it must be activated and registered in the system. Service activation requires that the International Mobile Equipment Identifier (SIM ID), basic device parameters, and the services definition be entered in the Home Location Register (HLR) on the home Mobile Switching Center (MSC) and on a home DAP. The data entry requirements are shown in Figure 2-1 below.
Figure 2-1 Initial SU System Contact
Call Monitoring/Surveillance

The iDEN system is in compliance with the Communications Assistance for Law Enforcement Act (CALEA). For both Interconnect and Dispatch calls, law enforcement agencies can monitor the communications of a selected SU(s).
Note: Before a carrier can begin call surveillance, a warrant must be presented. The warrant authorizes a Law Enforcement Agency (LEA) to conduct wiretap surveillance on a particular individual or group of individuals.
Billing
The Billing and Administration (BA) equipment is supplied by the customer. Business operations are solely the responsibility of the service provider. The iDEN system supplies usage and performance data for business operations and decision support. Note The policies, procedures, and mechanisms for the exchange of data, administrative, and operational control of the billing information is the responsibility of the service provider.
Horizontal Networking
Horizontal Networking (HN) in Dispatch provides the capability for individual Dispatch services between DAPs, whether in the home urban area or any other urban area, including Private Call (PC), Call Alert (CA), and SU status.
Subscriber Databases
The High Availability iDEN Home Location Register (HA-iHLR) provides a permanent database of all subscribers registered in the system. The subscriber profile includes: the types of Dispatch calls individual subscribers are allowed to make, the fleet assignments, talkgroup, and individual subscriber identification numbers. The HA-iHLR also stores the subscriber databases for wireless data services.
iDEN supports interfaces to a customer-supplied Administrative Data Center (ADC). The ADC is the service provider's administrative and business control equipment. The ADC may be used to enter/provision user information and to accept system metrics and call records.
This feature allows the Encryption of a Private Call (EPC) without a key exchange with the system infrastructure. The Dispatch encryption feature has various market applications; however, primary use focuses on the ability to make a Dispatch call to another capable subscriber using a sophisticated layer of eavesdropping protection.
Performance Management
The performance management function controls the collection and presentation of metrics to the system operator.
The On-Line Configuration Change (OLCC) feature improves operability and availability by significantly reducing system down time. The OLCC feature accomplishes this by the following:
Allowing network operators to change the iDEN system configuration while the system continues to process calls.
Providing the capability to change the iDEN radio system configuration while network elements are online processing calls, thereby reducing the overall scheduled configuration change outage time for the iDEN radio network.
Providing the mechanism to change the allocated system parameters without re-initializing the network element off-line.
Configuration Management
The iDEN configuration management monitors and controls the system and subsystem components for state, software release, and components. iDEN configuration management relates to the following areas:
The system configuration databases of the HA-DAP, EBTS, iBSC, iHLR, and MDG are downloaded from the OMC-R. These databases change as the physical configuration of the network expands to accommodate growth.
The Intelligent Optimization system (IOS) import / export tool is designed to help the AirGen user import carrier/frequency usage information from the currently running workspace in AirGen.
Global
Global refers to the worldwide, multi-provider, Public Switched Telephone Network (PSTN). The telephone services available worldwide may be accessed from the iDEN system using Interconnect calling. When the iDEN system uses the PSTN (Interconnect) system, the rules and procedures of the PSTN are used. Interconnect calling is the access to land-line systems and the services available and emerging in the public switched environment.
Region
A region is a large geographic area that is usually associated with more than one market or more than one urban area. Just as urban areas tend to be adjacent and overlap, so do service provider coverage areas. A single service provider supplies services by regions. A service provider may have adjacent or overlapping markets that can be linked to provide contiguous service across the areas. Regions are covered by Interconnect and Dispatch calling.
Domain
A domain is an optional logical division of a region. This is usually defined by marketing and sales strategies as a fleet-based geographical area. The intent is to subdivide the region into smaller areas based on expected usage patterns. Domain 0 is a system-wide domain. The default for Domain 0 is all service areas. There can be up to 50 domains assigned per region. Each domain may contain up to 254 service areas and represents the area of operation for a fleet or fleets. The domain rules are as follows:
Should be geographically contiguous and should not intertwine or overlap.
Should conform to obvious geographic and demographic borders.
Should have borders in areas of zero or few subscribers.
Should make use of natural obstacles that constrain subscriber movement.
To minimize movement of subscribers between domains, domain borders should avoid major roads, highways, and thoroughfares. Obstacles like rivers, mountain ranges, and diverse geographic areas separated by a lack of contiguous RF coverage are good boundaries. Users within a domain tend to stay within its borders. A business' fleet will normally stay within a domain throughout the course of the work day.
Urbans
The iDEN system is divided into large geographical areas known as urbans. In most cases, an urban is centered on a large city such as New York, Los Angeles, or Chicago. An urban can also span more than one city or more than one state. Urbans are sometimes referred to as markets. See iDEN Network Structure on page 2-9.
Service Area
A service area is Dispatch-only and is defined in provisioning by the subscriber and user databases. A service area is a group of Dispatch Location Areas (DLA) that define a range of operation. Multiple service areas may be defined. Service areas may overlap. Users in a service area tend to stay within its borders.
Location Area
A location area is a logical geographical area based on the current location of a SU. Each SU stores Location Area Identifiers (LAIs), which the system uses to locate the SU for paging and call setup. Note: Interconnect location areas are usually larger than Dispatch location areas. A Dispatch Location Area (DLA) is a geographic area, comprised of one or more EBTS sites, that manages and tracks the most recent location of a SU. The DLA defines the area where the Dispatch SU is currently located for paging and call setup. Since Dispatch call traffic is typically heavy, a DLA may be smaller than an Interconnect Location Area (ILA).
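The paging relationship described above amounts to a small lookup: the system records each SU's most recent DLA, and a page fans out to every EBTS site in that DLA. The following Python sketch is a simplification under assumed names; it is not the actual iDEN implementation.

```python
# Hypothetical sketch: track each SU's most recent Dispatch Location Area
# (DLA) and fan a page out to the EBTS sites in that DLA.
dla_sites = {
    "DLA-1": ["EBTS-101", "EBTS-102"],
    "DLA-2": ["EBTS-201", "EBTS-202", "EBTS-203"],
}
last_known_dla = {}          # SU id -> most recently reported DLA

def update_location(su_id, dla):
    """Record the SU's most recent DLA (kept current by registration)."""
    last_known_dla[su_id] = dla

def page(su_id):
    """Return the EBTS sites that must page this SU for call setup."""
    return dla_sites[last_known_dla[su_id]]

update_location("SU-42", "DLA-2")
print(page("SU-42"))   # all three sites in DLA-2 page the SU
```

Keeping the DLA small, as the text notes, bounds how many sites each Dispatch page touches.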
SR 16.0 05-Oct-07
Cell
A cell is defined as a uniquely identified coverage area whose function is influenced by the following variables:
- The environment (buildings, trees, terrain)
- Antenna pattern
- Tower height
- The transmit power of the serving EBTS
An EBTS may be configured to support one or more cells. A site with one cell is known as an omni-directional site. A site with two or more cells is known as a sectored site. Sectored sites typically have three cells. While the SU is served by a single cell at any given time, it monitors the quality of other cells to determine when a change is required. This mobility function is discussed further in subsequent sections of this manual.
- Urbans: large geographical areas, often referred to as markets.
- Fleets: the broad logical group of users based on a common bond.
- Talkgroups: a subset of a fleet based on the inter-relationship of users in the fleet (managers, sales, transportation, drivers, etc.).
- Subscriber Units (SU): the radio equipment a mobile subscriber uses to access the iDEN system.
iDEN Network Hierarchical Structure
Figure 2-2
The iDEN network is divided into urbans, and each urban is made up of hundreds of fleets. Each fleet is assigned a unique fleet ID, and each individual subscriber in the iDEN network is a member of a fleet. Each fleet is divided into talkgroups. Talkgroups allow groups of users to communicate with multiple subscribers at the same time.
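The Urban, Fleet, Talkgroup, SU hierarchy can be pictured as nested groupings. The Python sketch below is illustrative only (the fleet ID, talkgroup names, and SU IDs are made up); it shows how a talkgroup call addresses a subset of a fleet.

```python
# Illustrative data model of the iDEN hierarchy: an urban contains fleets,
# a fleet contains talkgroups, and each SU belongs to a fleet.
urban = {
    "fleet_1042": {                      # hypothetical unique fleet ID
        "members": {"SU-1", "SU-2", "SU-3", "SU-4"},
        "talkgroups": {
            "drivers":  {"SU-1", "SU-2"},
            "managers": {"SU-3"},
        },
    },
}

def talkgroup_targets(fleet_id, talkgroup):
    """SUs reached by a group call: the talkgroup subset of the fleet."""
    fleet = urban[fleet_id]
    return fleet["talkgroups"][talkgroup] & fleet["members"]

print(sorted(talkgroup_targets("fleet_1042", "drivers")))  # ['SU-1', 'SU-2']
```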
Chapter 3
iDEN Subsystem Overview
In This Chapter Topic See Page
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5 Radio Access Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6 Key Network Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
EBTS .............................................................................................. 3-6 RFDS ............................................................................................. 3-6 IMU/EAS/EAS2 .............................................................................. 3-7 iSC (ACG) ...................................................................................... 3-7 BR .................................................................................................. 3-7 Power Cabinet................................................................................ 3-7 DCS................................................................................................ 3-7
Call Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8 Interconnect Subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10 Interconnect Architecture . . . . . . . . . . . . . . . . . . . . . . . 3-10 Key Network Elements . . . . . . . . . . . . . . . . . . . . . . . . . . 3-11
iBSC ............................................................................................. 3-11 MSC ............................................................................................. 3-11 SMS ............................................................................................. 3-11 MLC.............................................................................................. 3-11 HLR .............................................................................................. 3-11 VMS ............................................................................................. 3-11 IVR ............................................................................................... 3-11
Call Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14 Dispatch Subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-18 Links and Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-18
iVPUd/iVPUdi ............................................................................... 3-19 HA-DAP........................................................................................ 3-19 HA-iHLR ....................................................................................... 3-19 EGT4 ............................................................................................ 3-19 APD .............................................................................................. 3-19 iDAC ............................................................................................. 3-19 iSG ............................................................................................... 3-20
Call Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-20 Packet Data Subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-24 Packet Data Subsystem . . . . . . . . . . . . . . . . . . . . . . . . . 3-24 Key Network Elements . . . . . . . . . . . . . . . . . . . . . . . . . . 3-25
Mobile Data Gateway ................................................................... 3-25 Mobile Data Gateway 4 ................................................................ 3-25 Billing Accumulator....................................................................... 3-25 Authentication, Authorization, and Accounting Server ................. 3-26 Home Agent ................................................................................. 3-26
Introduction
This chapter describes the major subsystems in the iDEN system in terms of their key equipment, software and databases, and function in providing system services such as calls.
EBTS
The EBTS consists of the cabinets that house the site components:
- Control Cabinet
- Power Cabinet
- RF Cabinet
RFDS
The Radio Frequency Distribution System (RFDS) is an 800 MHz or 900 MHz system that connects the base radios to the RF antennas. The RFDS is part of the RF Cabinet.
IMU/EAS/EAS2
Both the iMU and EAS receive alarm signals from various EBTS site equipment. The iMU and EAS are part of the Control Cabinet.
iSC (ACG)
The Integrated Site Controller (iSC) performs all control and synchronization functions for the EBTS. The iSC is part of the Control Cabinet.
BR
The Base Radio (BR) is responsible for control signaling and user voice/data. The BR is the RF interface of the EBTS site and transmits and receives in either the 800 MHz or 900 MHz iDEN frequency bands. The BR is part of the RF Cabinet.
Power Cabinet
The power cabinet typically consists of the power supply cabinet and back-up batteries. It supplies -48 V DC for powering the control and RF cabinets and for charging the batteries.
DCS
The Digital Cross Connect Switch (DCS) is a standard, programmable telephone industry device that provides a transport system within the iDEN network. It is the central connecting point for all T1 and E1 links in the iDEN network and is used to separate the information on the T1 or E1 links and route the data to the correct network element.
EBTS
- EBTS to/from the SU is via RF carrier.
- EBTS to/from the DCS is via T1/E1 links using a DS0.
- Ethernet 10Base2 connects the BR to the integrated Site Controller (iSC), carrying VSELP/AMBE++ traffic, packet and circuit data traffic, the Operations and Maintenance Link (OML), and MOBIS signaling.
- iSC to/from the MSC is via a T1/E1 connection that transports system information, call control, coded voice, and data traffic.
DCS
- DCS to/from the BSC/iBSC: Interconnect paths and control signaling are routed from the EBTS spans to the BSC/iBSC.
- DCS to/from the EBTS is via T1/E1 links using a DS0.
- EIA-232 provides service access as administrative links for user interface purposes.
Call Flow
The Radio Access Network has a role in both Interconnect and Dispatch call scenarios. Only the high-level steps for a Mobile-to-Mobile call that involve the Radio Access Network components are provided here. Note: iDEN training classes offer additional information on call flows for the Radio Access Network system. Figure 3-2 displays the processes and related call flow steps listed in the To Set Up a Call Flow with Radio Access Network table on page 3-8. Figure 3-2 Call Flow with Radio Access Network
Step 1
b) The EBTS assigns a channel in the cell where the SU is located.
c) The MSC signals the BSC/iBSC, which sends the EBTS a channel assignment request (DCCH).
d) The EBTS assigns a channel that will be used to complete call setup and to carry voice traffic.
The MSC to target SU setup follows the same process as the Land-to-Mobile call, with the exception of not connecting from the PSTN.
a) After a land call is initiated, the MSC sends instructions to the BSC/iBSC to page the target SU. The BSC/iBSC then forwards the page request to all EBTSs in the location area. The EBTSs send pages for all cells in the location area identified by the VLR for the SU.
b) Receipt of the page prompts the SU to request a channel from the EBTS.
c) The EBTS assigns a new channel in the cell where the SU is located.
d) The MSC sets up the connection to the BSC/iBSC and EBTS for voice traffic.
e) The MSC gives a channel assignment request to the BSC/iBSC, which gives it to the EBTS. The EBTS assigns a channel that will be used to complete the call setup and to carry voice traffic.
The voice connection for the originating SU goes from the originating SU to the MSC. The originating SU sends a connect acknowledge message back to the MSC. The voice connection for the target SU goes from the MSC to the target SU. At this point the MSC opens the voice channel between the PSTN and the SU, and conversation takes place.
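The paging leg above can be summarized as an ordered list of signaling exchanges. The Python sketch below is a simplified trace for illustration; the message names are abbreviations of the steps in the text, not actual MOBIS or SS7 message identifiers.

```python
# Simplified trace of the mobile-termination paging leg described above.
# Each entry is (sender, receiver, message); the message strings are
# abbreviations of the steps in the text, not real protocol identifiers.
page_sequence = [
    ("MSC",      "BSC/iBSC", "page target SU"),
    ("BSC/iBSC", "EBTS",     "forward page to all EBTSs in location area"),
    ("EBTS",     "SU",       "page (all cells in location area)"),
    ("SU",       "EBTS",     "channel request"),
    ("EBTS",     "SU",       "channel assignment"),
    ("MSC",      "BSC/iBSC", "set up voice connection"),
]

def messages_seen_by(element):
    """Messages the given element sends or receives, in order."""
    return [m for m in page_sequence if element in (m[0], m[1])]

for sender, receiver, msg in page_sequence:
    print(f"{sender} -> {receiver}: {msg}")
```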
Interconnect Subsystem
The Interconnect subsystem tracks the Subscriber Unit's (SU) general location area and authorizes services. It makes connections between the wireless subsystem and the land telephony network and routes Interconnect calls. Interconnect calling allows a SU to travel freely throughout an iDEN system and the systems of teamed roaming partners. Within this extended network, a user can originate or receive Interconnect calls. The system tracks the SU's location and services, and routes the calls. Using networked regions, the SU can roam into those Interconnected regions, originating and receiving calls as if the SU were in its home system.
Interconnect Architecture
This architecture represents the Interconnect system for SR16.0. Figure 3-3 SR16.0 Interconnect Architecture
iBSC
The iDEN Base Site Controller (iBSC) performs call processing and operations and maintenance, and routes Interconnect packets between the EBTS and the MSC via the iDEN Call Processor (iCP) and iDEN Vocoder (iVPU).
MSC
The Mobile Switching Center (MSC) is a Nortel DMS100 switch that determines Interconnect services and location information and also controls and routes the calls to other providers.
SMS
The Short Message Service provides text messaging services for short messages (up to 140 characters in length) to the SU.
MLC
Mobile Location Center (MLC) is a central office platform that determines the physical location of a handset (SU) in a cellular telephone network.
HLR
The Home Location Register (HLR) is the location (database) where the SU permanent subscriber records are stored.
VMS
The Voice Message Service (VMS) is not iDEN-specific equipment and serves as an answering machine and stores messages for the SU, indicating when voice mail is present.
IVR
The Interactive Voice Response (IVR) is non-iDEN-specific equipment that allows SUs to communicate directly with a computer using recorded voice prompts and text-to-speech.
SDM/FT
The SuperNode Data Manager/Fault Tolerant (SDM/FT), also known as SDM, manages data formatting and provides a point of access to the switch, allowing Interconnect surveillance, customer provisioning, and billing.
IWF
The Interworking Function (IWF) provides circuit switched data services to provide wireless modem and fax data capabilities.
- MOBIS is used for call set-up and tear-down to and from the radios.
- SS7 is used for call set-up and tear-down to and from the switch and PSTN.
- SNMP is used for operational management to and from the EBTS sites and the OMC-R.
- Coded voice to and from the radios and PCM to and from the switch is sent over wire from the radios and over fiber to the MSC.
- Links between the call processors and voice processors support control and maintenance functions that are carried over wire or are delivered in packets on a LAN.
iBSC
The links for the iBSC are dependent on the component. The specific links are as follows: iCP
- iCP to/from the EBTS via DCS is on T1/E1 links carrying MOBIS and SNMP channels.
- iCP to/from the MSC is via T1/E1 links using SS7 signaling.
iVPU
- iVPUi to/from the MSC is via OC3 using SS7.
- iVPUi to/from the EBTS via DCS is on OC3 links using SNMP and MOBIS channels.
- An Ethernet (Internet Protocol) link supports control and maintenance functions that are carried over wire or delivered on a LAN.
MSC
- MSC to/from the BSC via the DCS uses T1/E1 connected by PCM voice on wire or OC3.
- MSC to/from the PSTN is via T1/E1 using PCM or OC3.
- MSC to/from SMS and HLR is via V.35 using SS7 signaling.
- Ethernet (Internet Protocol) provides input/output of customer provisioning, billing, surveillance, and set-up of IWF for circuit data.
- EIA-232 provides service access.
MLC
HLR
The HLR is connected to the MSC via a V.35 link using SS7 signaling.
VMS
- VMS to/from the MSC is via V.35 using SS7 signaling.
- VMS to/from the SMS is via an Ethernet link using a TCP/IP SMPP connection.
- EIA-232 provides service access.
IVR
SDM/FT
- SDM/FT to/from the MSC is via a DS512.
- SDM/FT to/from the Call Intercept Provisioning Center (CIPC) and Law Enforcement Monitor (LEM) is via an Ethernet link to set up the MSC for surveillance.
Legacy IWF
- IWF from the MSC is via T1/E1 using serial data.
- An Ethernet connection supports IWF to/from the MSC.
- EIA-232 provides service access.
Legacy BSC
The links for the Legacy BSC are dependent on the component. The specific links are as follows: CP
- BSC-CP to/from the EBTS via DCS is on T1/E1 links carrying MOBIS and SNMP channels.
- BSC-CP to/from the MSC is via T1/E1 links using SS7 signaling.
- BSC-CP to/from the OMC-R is via T1/E1 links using X.25.
XCDR
- BSC-XCDR to/from the MSC is via T1/E1 carrying user voice (PCM).
- BSC-XCDR to/from the EBTS via DCS is on T1/E1 links using VSELP/AMBE+.
Call Flow
A typical Interconnect Mobile-to-Land call is comprised of three processes. They are as follows:
- Resource Request
- Call Setup
- Connecting Voice
Note: The iDEN training classes offer additional information on call flows for the Interconnect system.
The following figures display the processes and related call flow steps listed in the To Set Up the Interconnect Call table on page 3-15.
Figure 3-4
Figure 3-5
Figure 3-6
If a channel is not available, the service request is not sent and the SU user receives an alert to indicate all channels are busy. If the channel is successfully assigned, the service request containing the ID of the SU and type of service being requested is sent to the MSC.
- Set Mobile Busy: The MSC sets the SU to busy to prevent allocation of radio resources for other calls intended for this SU, and for administration of supplementary services.
- Access Request: The MSC looks up the current SU subscriber information in the VLR to make sure the SU is authorized for the type of service being requested.
- Authentication Request: As an option, authentication takes place between the VLR and the SU.
- Authentication Response: The SU sends an authentication response back to the VLR (DCCH).
- Service Accepted: If authentication is successful, the VLR sends a service accepted message to the SU (DCCH).
- Setup Call Information: The SU sends the dialed digits (telephone number) of the target telephone to the MSC (DCCH).
- Call Proceeding: The MSC sends a call proceeding message to the SU. This message tells the SU that the MSC will not accept any further call setup information and that it is proceeding to set the call up with the information it has (DCCH).
Dispatch Subsystem
The Dispatch subsystem is the group of network elements and links that are actively involved in providing two-way Dispatch services. The Dispatch subsystem elements provide wireline transmission, call processing, switching, and the signaling needed for Dispatch call set up and control.
Hyper Text Transfer Protocol Secure (HTTPS), Secure Shell, and Secure FTP
Virtual Router Redundancy Protocol (VRRP)
With the NGD architecture, static routes are used instead of OSPF to reduce and simplify the configuration requirements. Because static routes are used, the Virtual Router Redundancy Protocol (VRRP) is required to support high availability on the iVPU.
Hot Standby Router Protocol (HSRP) Hot Standby Routing Protocol (HSRP) offers a failover mechanism for pairs of routers within a LAN. Described in RFC 2281, HSRP introduces the idea of a virtual router, complete with a virtual IP address and a virtual MAC address.
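The active-router election behind HSRP (RFC 2281) can be sketched simply: the highest-priority live router in the group answers for the virtual IP, and a standby takes over when it fails. The Python below is a minimal illustration of that outcome, not an HSRP implementation; real HSRP elects the active router via multicast hello messages and breaks priority ties by higher interface IP address.

```python
# Minimal sketch of HSRP-style failover (RFC 2281): routers in a group
# share one virtual router, and the highest-priority live router is
# active. This only models the election outcome, not the hello protocol.
class Router:
    def __init__(self, name, priority, alive=True):
        self.name, self.priority, self.alive = name, priority, alive

def active_router(group):
    """Highest-priority router that is still up serves the virtual IP."""
    live = [r for r in group if r.alive]
    return max(live, key=lambda r: r.priority) if live else None

group = [Router("rtr-a", priority=110), Router("rtr-b", priority=100)]
print(active_router(group).name)   # rtr-a is active

group[0].alive = False             # rtr-a fails; the standby takes over
print(active_router(group).name)   # rtr-b
```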
Session Description Protocol (SDP) The iGW supports Selective Dynamic Group Calls (SDGC) between a 3G network and the iDEN network. The Voice Signaling Gateway (VSGW) converts 3G SIP signaling messages to iDEN proprietary signaling, and sends a page request to the appropriate DAP when a Session Initiation Protocol (SIP) invite request is received. The invite includes the group identifier or group list, and the SDP offer. The SDP offer includes call details such as the vocoder and the User Datagram Protocol (UDP) port number. The transcoders convert 3G Voice over IP (VoIP) packets to VSELP or AMBE++, and convert Real Time Transport Protocol (RTP) to the iDEN version of RTP (iRTP).
iVPUd/iVPUdi
HA-DAP
Since the DAP is the Dispatch and Packet Data call manager/router, it has several interface types:
- EBTS sites (via the DCS using T1/E1)
- APDs (High Speed Serial Interface, HSSI)
- iDAC (High Speed Serial Interface, HSSI)
- MPS (V.35)
- OMC-R
- iHLR
HA-iHLR
Internally, the HA-iHLR communicates between 2 nodes across a private dedicated LAN. External communications are supported by two types of links:
- The EIA-232 interface is used to communicate Man-Machine Interface (MMI) commands to and from the Local Maintenance Terminal (LMT).
- The two Ethernet links support communication from each of the nodes to the Central LAN. The iHLR communicates via the Central LAN with the OMC-R, the provisioning client, and the DAPs.
EGT4
APD
High Speed Serial Interface (HSSI) provides communication of VSELP and AMBE++ to and from the MPS.
iDAC
The HSSI link brings the coded voice from the MPS to the iDAC for interurban routing of voice. The EIA-232 interface is used for communication between the iDAC and the LMT; this RS-232 connection is the service access point between the LMT and the iDAC.
The Ethernet WAN (100Base-T) passes the coded voice using either ATM or IP virtual circuits via the central LAN, WAN, and remote iDACs. The Ethernet LAN (100Base-T) provides control to and from the DAPs and SNMP to the OMC-R.
iSG
The iSG receives its runtime software from the OMC via a 10BaseT Ethernet interface. The iSG reports alarms, state changes, and statistics to the OMC over this interface. The 10BaseT Ethernet interface also provides connectivity between the iSG and the DAP population, as well as between the iSG and the SPA. An additional 100BaseT Ethernet interface provides a conduit for delivery of surveillance information to Law Enforcement. The iSG is equipped with an RS-232 interface as a service access point. This allows a local maintenance terminal to be connected to the iSG to serve as a basic Man-Machine Interface (MMI) between field service personnel and the iSG. The iSG uses Ethernet from the SPA to set up the DAP for surveillance. A voice path is set up by the DAP, APD, and MPS to deliver voice over an HSSI frame relay connection to the iSG.
Call Flow
This subsection describes and illustrates a simplified view of the procedure for typical Dispatch calls. The descriptions and illustrations reference the key network elements involved, the order of the call flow, and what and how the system communicates at each step (links and protocols). A typical Dispatch call is comprised of the following three processes:
- Establish radio link
- Route digital voice packets
- Duplicate packets
There are different types of Dispatch calls. The group call is described below. Note: The iDEN training classes offer additional information on all call flows for the Dispatch system.
The following figures display the processes and related call flow steps listed in the To Set Up a Dispatch Group Call Procedure table on page 3-21.
Figure 3-7
Figure 3-8
Figure 3-9
Group Call Proceeding: The iSC acknowledges back to the SU that the Group Call Request is being processed (PCCH).
4. Group Call Request: The iSC sends the Group Call Request information to the HA-DAP.
5. Group Call Page Request: The HA-DAP sends a Group Call Page Request to each iSC in the location area registered as serving each SU. This message passes the Global Call ID assigned to the service request to the iSC.
6. Group Call Page Request: The iSC passes on the request to the SU (PCCH).
7. Group Call Page Response: The SU responds with its Individual ID and Dispatch Group ID. Note that steps 5 and 6 may be reinitiated by the iSC for a short period of time, but not for the entire duration of the call. Thus an SU could miss a call if it is in a coverage null for an extended period of time.
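The note above (the iSC may reinitiate the page for a short period, but not for the whole call) amounts to a bounded retry. The Python sketch below illustrates that under an assumed attempt count; the function names and the retry limit are hypothetical, since the real behavior is timer-based.

```python
# Hypothetical sketch of the bounded page retry noted above: the iSC
# repeats the Group Call Page Request a few times, then gives up, so an
# SU in a coverage null for the whole window misses the call.
MAX_PAGE_ATTEMPTS = 3   # assumed value; the real limit is timer-based

def page_with_retry(send_page, max_attempts=MAX_PAGE_ATTEMPTS):
    """send_page() returns True when the SU answers with a Page Response."""
    for attempt in range(1, max_attempts + 1):
        if send_page():
            return attempt          # SU answered on this attempt
    return None                     # SU missed the call

# SU emerges from a coverage null in time for the second page:
responses = iter([False, True])
print(page_with_retry(lambda: next(responses)))   # 2
```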
Group Call Page Response Information: The iSC forwards the Group Call Page Response Information to the HA-DAP, indicating that the SU is present and that traffic resources are available and assigned.
10. Add Configuration: The HA-DAP sets up the call and sends a message to the iVPU to establish the routing table for the call.
11. Group Call Start: The HA-DAP sends a Group Call Start Notification to each iSC serving the call.
12. Group Call Grant: The iSC sends a Group Call Grant message to the SU, setting up the call on a traffic channel (PCCH).
13. Voice: SU units set up in the group call are now ready to exchange digital voice information, as follows (TCH).
- Frame relay packets inbound to the MSO from the originating iSC contain the frame relay address of the packet duplicator handling the call, the Global Call ID, and the VSELP voice packet.
- Frame relay packets outbound to the iSCs from the iVPU contain the frame relay address of the serving iSC, the Global Call ID, and the VSELP voice packet.
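Each voice frame described above carries three pieces of information: a frame relay address, the Global Call ID, and the coded-voice payload. The Python sketch below packs and unpacks such a frame for illustration only; the field widths (16-bit address, 32-bit call ID) are assumptions, not the actual iDEN frame layout.

```python
import struct

# Illustrative packing of the three fields each Dispatch voice frame
# carries: a frame relay address (DLCI), the Global Call ID, and the
# VSELP payload. Field widths here are assumptions for illustration.
def pack_voice_frame(dlci, global_call_id, vselp_payload):
    # ">HI" = big-endian 16-bit DLCI + 32-bit Global Call ID, then voice.
    return struct.pack(">HI", dlci, global_call_id) + vselp_payload

def unpack_voice_frame(frame):
    dlci, call_id = struct.unpack(">HI", frame[:6])
    return dlci, call_id, frame[6:]

frame = pack_voice_frame(dlci=0x64, global_call_id=123456,
                         vselp_payload=b"\x01\x02")
print(unpack_voice_frame(frame))   # (100, 123456, b'\x01\x02')
```

Routing by address while the Global Call ID stays constant is what lets the duplicator fan one inbound frame out to every serving iSC.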
The Mobile Data Gateway (MDG) is the centerpiece element of the Packet Data subsystem. Its primary purpose is to execute the functions of a foreign agent as specified by the mobile IP standard. The MDG can be visualized as the new local address of the SU on a foreign link. It works in conjunction with the Home Agent (HA) router to receive packets forwarded from the SU's home address and distributes these packets to the MS. The MDG also routes packets from the SU, in conjunction with the HA router, toward their final destination.
The MDG receives operational information (datafill, downloaded on bootup) from the OMC-R and subscriber information from the iDEN Home Location Register (iHLR) via the DAP. The iHLR is a database specifically designed for the Packet Data subsystem; it provides subscriber provisioning for Packet Data and is the essential element for roaming. The MDG2 supports 5000 Packet Data subscribers. The MDG generates billing records for provided Packet Data service and sends them to the Billing Accumulator (BA).
The Automatic Fault Recovery Process (AFRP) is a procedure used by the OMC-R to move an active Resource Allocation Group (RAG) from a failed MDG2 to a backup MDG2 to restore Packet Data service. This process has been replaced in the MDG4 (see the next section below).
The Mobile Data Gateway 4 (MDG4) was deployed in SR13.4 (and did not change in SR15.0) to replace the MDG2. The MDG4 supports up to 600K subscribers per MDG4 node versus 5000 in the MDG2. The scalability of the MDG4 node also provides the ability to address four subscriber targets: 256K, 384K, 512K, and 600K subscribers. MDG Cluster Management is a new concept in the MDG4 architecture that replaces the AFRP in the MDG2 architecture. MDG Cluster Management is managed by a mechanism resident to the pool of MDG4 nodes within the MDG4 cluster itself without OMC-R involvement.
Billing Accumulator
The Billing Accumulator (BA) is custom-built for inclusion in the iDEN Packet Data subsystem. Its function is to collect Packet Data subscriber usage data for billing purposes.
The BA receives usage detail records from the MDG nodes using Billing Message Protocol (BMP) and Billing Session Protocol (BSP), which are proprietary protocols on top of TCP/IP. This usage data is stored on a hard drive on the BA until compilation. One BA is designated as the primary unit and the other is designated as the secondary, or standby, unit. Both BAs receive the same billing data from all MDGs (to accomplish mirroring). If the primary BA becomes unavailable, the secondary unit continues with data collection. The BAs control the designation of which one is the primary and which one is the secondary unit. Note: The BA is also referred to as the Distributed Call Measurement System (DCMS).
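The BA redundancy described above (both units receive every record, and the secondary continues collection when the primary fails) can be sketched as follows. The Python is illustrative only; the class and method names are assumptions, not iDEN software, and the real BAs negotiate the primary/secondary roles between themselves.

```python
# Sketch of BA redundancy: both units receive every usage record
# (mirroring), and the secondary continues if the primary is unavailable.
# Names are illustrative, not actual iDEN software.
class BillingAccumulator:
    def __init__(self, name):
        self.name, self.records, self.available = name, [], True

class BAPair:
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary

    def receive(self, record):
        # Every MDG record is delivered to both units while they are up.
        for ba in (self.primary, self.secondary):
            if ba.available:
                ba.records.append(record)

    def collecting_unit(self):
        """The unit currently responsible for collection."""
        return self.primary if self.primary.available else self.secondary

pair = BAPair(BillingAccumulator("BA-1"), BillingAccumulator("BA-2"))
pair.receive({"su": "SU-7", "bytes": 4096})
pair.primary.available = False       # primary fails
print(pair.collecting_unit().name)   # BA-2 continues with the same data
```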
The Authentication, Authorization, and Accounting (AAA) Server is customer-provided equipment that contains security-provisioning information for subscribers to Packet Data service. It is used in conjunction with the HA router for SU authentication that determines whether an SU is authorized to receive Packet Data service. Note: The actual authentication of an SU takes place on the HA, not at the AAA Server (see the next section).
Home Agent
The Home Agent (HA) router resides on the home network of the SU and is aware of the SU home IP address. The HA performs Mobile IP registration by intercepting and forwarding (or tunneling) packets that are destined for the SU home link to the MDG that is currently serving the MS. The HA provides the necessary authentication procedures as outlined in the mobile IP standard. In conjunction with the AAA Server, the HA provides authentication procedures that determine whether the SU is authorized to receive Packet Data service, and whether it should forward the packets to the MDG.
MDG
The MDG communicates using two separate subnetworks: the Packet Data subnetwork and the Dispatch subnetwork. This is accomplished using two different methods. They are as follows:
Physical links:
- MDG-to-MPS connection: V.35 FR
- Ethernet connections through the hubs to the interface router:
  - MDG-to-both-BAs: proprietary protocols on top of IP/Ethernet
  - MDG-to-Interface Router: IP/Ethernet
- MDG-to-OMC-R: V.35 FR routed through the MPS to the OMC-R
- MDG-to-DAP/iHLR: V.35 FR through the MPS to the DAP
- MDG-to-ACG: V.35 FR through the MPS, FR over the T1 span through the DCS, and out to the Access Controller Gateway (ACG)
Virtual links:
BA
Both BAs are connected to the MDG using proprietary protocols on top of IP/Ethernet. Each BA is connected to the HA via IP/Ethernet.
AAA Server
The AAA Server is connected to the Public LAN via IP/Ethernet. It uses the same protocol to connect to the HA.
HA
The HA is connected to the Public LAN via IP/Ethernet. It uses the same protocol to connect to the AAA Server.
Call Flow
Generally speaking, the purpose of Packet Data registration is to validate, activate, and maintain Packet Data service. Specifically, Packet Data registration performs the following:
- Assigns the SU to a Mobile Data Gateway (MDG) node.
- Creates and populates a working record in the chosen MDG node.
- Negotiates Packet Data related parameters between the SU and the MDG.
- Initializes the RF link related procedures into a known state.
Packet Data has several types of call flows. Packet Data registration is discussed here. Note: The iDEN training classes offer additional information on all call flows for the Packet Data system.
Figure 3-5 displays the processes and related call flow steps listed in the To Set Up a Packet Data Call Flow table on page 3-27.
The SU makes a Packet Data Registration Request to the DAP via the iSC.
Packet Data Registration Request
The DAP selects an MDG to serve the SU. When the DAP chooses an MDG for a new Packet Data registration, it round-robins over the available MDGs to evenly distribute the load. The DAP sends PD information to the selected MDG node.
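The round-robin selection described above can be sketched in a few lines of Python. This is an illustration only; the `DAP` class and node names are assumptions, not the iDEN implementation.

```python
from itertools import cycle

# Sketch of the round-robin MDG selection described above: the DAP walks
# the list of available MDG nodes in turn, so new Packet Data
# registrations are spread evenly. Names are illustrative.
class DAP:
    def __init__(self, mdg_nodes):
        self._next_mdg = cycle(mdg_nodes)

    def select_mdg(self):
        """Choose the MDG node for a new Packet Data registration."""
        return next(self._next_mdg)

dap = DAP(["MDG-1", "MDG-2", "MDG-3"])
print([dap.select_mdg() for _ in range(5)])
# ['MDG-1', 'MDG-2', 'MDG-3', 'MDG-1', 'MDG-2']
```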
Packet Data Registration Response
The DAP backs up information sent to the SU in the registration accept for use in MDG failure recovery.
The MDG determines final negotiated parameters from the proposed parameters and its own capabilities. It then creates a new subscriber working record that includes the IMSI, Individual ID, final negotiated parameters, etc. For more information on Packet Data call flows, see Packet Data Service Registration And Call Processes (68P80801C90).
The OMC-R is the system operators' interface to the iDEN network. The Backup server provides automated tape backups for the urban. The Network Time Server provides time-of-day clocking.
The OMC-R System Processor is the communication link and resource manager within the OMC-R for the following:
- Event/Alarm Management
- Fault Management
- Performance Management
- Configuration/Load Management
- Security Management
Note: The OMC-R is either an Enterprise 3500 or a Sun Fire 4800.
Man-Machine Interface
Man-Machine Interface (MMI) is a Graphical User Interface (GUI) that allows you to enter commands and review alarms within the OMC-R. Note: The MMI is either a Sun Ultra 60 or a Sun Netra 20.
The Sun Ray terminals allow operators to log in to the OMC-R and perform network management and system maintenance tasks. The Sun Ray terminal is connected to the system processor via the LAN to run various Graphical User Interface (GUI) applications.
Ethernet
Many of the network elements connect to the OMC-R via the Central LAN.
The Network Time Server (NTS) is part of the Larus BITS equipment. It provides time-of-day time stamps to the OMC-R equipment via an NTS Output Card.
The legacy X.25 Packet Switch takes packets from the OMC-R Processor V.35 links. The switch reads the packets' addresses and then selects the right link for the addressed network node. The Packet switch re-transmits packets on the V.35 link outbound for addressed network nodes.
The legacy Cisco WAN Manager (CWM) is a graphical user interface that allows system operators to select which tool or application to launch for Metro Packet Switch (MPS) management.
The legacy Performance Technology Inc. (PTI) ZT5088 X.25 hub is needed for legacy BSCs. The X.25 hub forwards traffic between the OMC-R and legacy BSCs via an integrated Ethernet switch.
The legacy Network Access Server (NAS) receives packets coming from the Packet Switch on many V.35 links. NAS grooms each V.35 signal onto a T1 DS0.
- Disk array, backup server (Sun Netra 240 and C4 StorEdge tape jukebox), and LAN gigabit Ethernet switch
- Routers, packet switch or channel bank, network access server, frame relay hardware, network timing hardware
- Event/Alarm Management
- Fault Management
- Performance Management
- Configuration/Load Management
- Security Management
Event/Alarm Management
Events/Alarms are generated at a network element and sent to the OMC-R via an X.25 network. The operator will be informed only of those events/alarms to which a subscription has been made.
Fault Management
If an operator is notified of a system fault by event/alarm management, the fault management feature enables the operator to do the following:
Pinpoint the location of the fault by remotely running diagnostics.
Isolate the fault by changing the state of a device.
Performance Management
Performance Management provides system operators with information necessary for efficient network management and indications of quality of service failures. Performance Measurements are as follows:
Quality of Service: call setup delay, as an example
Availability: beginning and end times of service unavailability
Configuration/Load Management
Configuration and Load Management is used to change a managed network element's operating parameters (for example, the RF frequencies of an EBTS) that are stored in the network element's configuration file. Using event/alarm and performance management information, this provides the ability to fine-tune the performance of the network elements.
Security Management
Security Management includes Network/System Security and Data security. The security feature includes the following:
Each operator has a unique user name and associated password to gain access to the system.
System administrators can add/change/remove passwords.
The System Processor checks validity of the source address and validity of the user on all incoming traffic on the X.25 network.
All login attempts are logged by operator ID, terminal ID, and time.
Operators are assigned security classes specifying what commands operators may access and what output messages can be seen.
Users are logged off by the system after a specified interval if no activity is seen.
For monitored and not-monitored Network Elements in Legacy, Mixed Mode, and Next Generation Dispatch:
OMC-R to/from the MDG (Frame Relay): Operations and Maintenance links to the MDGs
OMC-R to/from the OMC-R X.25 Packet Switch (X.25): event, alarm, and statistics reporting, and configuration/load files (legacy BSCs, EBTSs)
A high-speed Ethernet LAN is used within the network management subsystem (E0 LAN) and is optional on the rest of the LAN (E1 LAN).
V.35
The following describes the collection flow of status information for the iCP. Note: The NAS and Packet Switch are not used in this process. Figure 3-14 iCP Status Information Flow (Collection)
The following describes the reporting flow of status information for the iCP. Note: The NAS and Packet Switch are not used in this process. Figure 3-15 iCP Status Information Flow (Report)
Converts between 3G Session Initiated Protocol (SIP) signaling messages and iDEN proprietary signaling
Converts standard RTP to iDEN RTP (iRTP), and also checks RTP packet integrity
Provides protocol interworking between the proprietary iDEN system and the 3G Access Networks
Provides a pseudo-HLR (P-iHLR) function by providing the iDEN system with the next-hop route to the 3G network
Transcoders interwork 3G Voice over IP (VoIP) packets and iDEN Dispatch VoIP packets
Provides voice transcoding and de-jitter between the iDEN vocoders and the 3G Access Network
Transcoding between VSELP and AMBE++ (on the iDEN side) and SMV, EVRC, and G.711 on the 3G side
XCDR - Transcoder
See Figure 3-16 for information about iGW in the iDEN network.
The network elements in the iGW subsystem are connected to and communicate with each other and other iDEN network elements as described below.
iGW: The iGW is linked using Ethernet. The protocol used between the iGW and the OMC-R is SNMP/UDP.
VSGW: The VSGW is linked using Ethernet. The protocols used between the VSGW and the XCDR are Megaco and RTP/UDP; SIP/UDP and SIP/TCP are used toward the SIP domain.
XCDR: The XCDR is linked using Ethernet. The protocols used include:
File Transfer Protocol (FTP)
Link Version Control Protocol (LVCP)
Real-Time Protocol (RTP)
Session Initiation Protocol (SIP)
Session Description Protocol (SDP)
SDGC Calls: The iGW utilizes the following protocols in an SDGC call:
File Transfer Protocol (FTP)
Link Version Control Protocol (LVCP)
Real-Time Protocol (RTP)
Session Initiation Protocol (SIP)
Session Description Protocol (SDP)
FTP: Call detail records are transferred by the VSGW to the billing server using FTP.
LVCP: The LVCP provides the interface for peer DAPs to exchange Selective Dynamic Group (SDG) parameter information. If an LVCP message contains an invalid SDG parameter, the receiving DAP sends an alarm to the OMC-R and uses the default configuration for the SDG call.
RTP: The bearer path for a private Dispatch call carries the Voice over IP (VoIP) packets between the SIP and iDEN domains. The RTP header carries the payload type, sequence number, and timestamp. The payload type field distinguishes between media packets, inband (tone) signaling, fax, or noise packets. The sequence number detects missing or out-of-sequence packets. The timestamp is used for jitter control and silence suppression recovery.
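The RTP header fields named above (payload type, sequence number, timestamp) follow the standard RTP fixed-header layout. As a sketch, assuming the public RFC 3550 layout rather than the proprietary iRTP variant, a minimal parser might look like:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the 12-byte fixed RTP header (RFC 3550 layout)."""
    if len(packet) < 12:
        raise ValueError("truncated RTP packet")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,         # RTP version, always 2
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,  # distinguishes media/tone/fax/noise
        "sequence": seq,            # detects missing/out-of-sequence packets
        "timestamp": ts,            # used for jitter control
        "ssrc": ssrc,
    }
```

For example, `parse_rtp_header(struct.pack("!BBHII", 0x80, 96, 7, 160, 0x1234))` yields version 2, payload type 96, sequence 7.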
SDP: SDP carries the encoding type, origin and destination IP addresses, UDP port numbers, session start/stop time, and other attributes. SDP parameters are carried in any SIP request or response message body. When a Session Initiation Protocol (SIP) Invite request is received, the Invite includes the group identifier or group list and the SDP offer. The SDP offer includes call details such as the vocoder and User Datagram Protocol (UDP) port number. The transcoders use the SDP information to convert 3G Voice over IP (VoIP) packets to VSELP or AMBE++, and to convert Real-Time Transport Protocol (RTP) to the iDEN version of RTP (iRTP).
SIP: The iDEN network interacts with various 3G and broadband next-generation networks. These networks utilize high-speed access, multimedia and data services over IP, and call control signaling using SIP and SDP. SIP is an open standard that is used in various 3G networks, while iDEN uses proprietary Motorola signaling. The iGW translates the signaling between the SIP domains and the iDEN network.
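As an illustration of the SDP details mentioned above (vocoder and UDP port carried in the offer), the sketch below extracts both from a simple SDP audio body; the IP addresses, port, and codec entry are invented for illustration:

```python
def parse_sdp_offer(sdp: str) -> dict:
    """Pull the media UDP port and codec list out of a simple SDP audio offer."""
    port, codecs = None, {}
    for line in sdp.splitlines():
        if line.startswith("m=audio"):
            port = int(line.split()[1])         # m=audio <port> RTP/AVP <pt...>
        elif line.startswith("a=rtpmap:"):
            pt, name = line[len("a=rtpmap:"):].split(" ", 1)
            codecs[int(pt)] = name              # e.g. 0 -> "PCMU/8000"
    return {"udp_port": port, "codecs": codecs}

# Hypothetical offer body:
offer = "\r\n".join([
    "v=0",
    "o=- 1 1 IN IP4 192.0.2.10",
    "s=-",
    "c=IN IP4 192.0.2.10",
    "t=0 0",
    "m=audio 49170 RTP/AVP 0",
    "a=rtpmap:0 PCMU/8000",   # G.711 mu-law, one codec the manual lists
])
print(parse_sdp_offer(offer))
```

The transcoder-facing logic would consume exactly these two pieces of information: which vocoder to negotiate and which UDP port to send RTP toward.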
Call Flow
The private call setup flows create a signaling path and a bearer path. During the call, the signaling path is used for talker arbitration (floor control) and call tear-down. The bearer path for a private call carries the Voice over IP (VoIP) packets between the SIP and iDEN domains. Note: There are different types of iGW calls. The Selective Dynamic Group Call (SDGC) Session Initiated Protocol (SIP) to iDEN (S2D) call is described below. The iDEN training classes offer additional information on all call flows for the iGW system. Figure 3-17 displays the processes and related call flow steps listed in the Follow the Call Flow for an iGW SIP to S2D table on page 3-39.
Follow the Call Flow for an iGW SIP to S2D

Step 1 (Invitation Request): The 3G network forwards a Session Initiated Protocol (SIP) Invite request from a 3G subscriber. The Invite includes the group identifier or group list and the Session Description Protocol (SDP) offer. SDP includes call details such as vocoder, UDP port number, and so on.
Step 2 (Session Progress): The VSGW sends a 183 Session Progress response to the Invite, and forwards a Proxy Page Request Forward message to the HA-DAP.
Step 3 (Reserve Resource): The VSGW reserves Transcoder (XCDR) resources using the Vocoder Resource Reservation Request.
Step 4 (Reserve Response): The Transcoder (XCDR) allocates the resources and replies with a Vocoder Resource Reservation Response.
Step 5 (Page Response Forward): After an iDEN subscriber responds to the page, the HA-DAP sends an SDGC Page Response Forward message to the VSGW.
Step 6 (Dispatch Call Connected): The VSGW sends a response ('200 OK') to the 3G endpoint signaling that the Dispatch call is connected.
Step 7 (Endpoint Acknowledges): The 3G endpoint sends an acknowledgement message (ACK) to close the Invite process.
Step 8 (Floor Control): The VSGW interworks the floor control signaling. The Selective Dynamic Group Call is set up and active.
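The S2D setup sequence above can be sketched as an ordered message trace. The direction labels and message strings below paraphrase the manual's steps and are not wire-format identifiers; the ordering check mirrors one invariant of the flow (the endpoint ACKs only after the 200 OK):

```python
# Ordered message trace for the S2D SDGC setup (paraphrased step names).
S2D_SETUP = [
    ("3G -> VSGW",     "SIP Invite (group id/list + SDP offer)"),
    ("VSGW -> 3G",     "183 Session Progress"),
    ("VSGW -> HA-DAP", "Proxy Page Request Forward"),
    ("VSGW -> XCDR",   "Vocoder Resource Reservation Request"),
    ("XCDR -> VSGW",   "Vocoder Resource Reservation Response"),
    ("HA-DAP -> VSGW", "SDGC Page Response Forward"),
    ("VSGW -> 3G",     "200 OK"),
    ("3G -> VSGW",     "ACK"),
]

def ack_closes_invite(trace) -> bool:
    """The 3G endpoint ACKs only after the 200 OK signals the call connected."""
    msgs = [msg for _, msg in trace]
    return ("ACK" in msgs and "200 OK" in msgs
            and msgs.index("ACK") > msgs.index("200 OK"))
```

A test harness for call-flow traces could apply checks like `ack_closes_invite` to captured signaling logs.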
Chapter 4
New Features in SR16.0
In This Chapter Topic See Page
SR16.0 Feature Descriptions . . . . . . . . . . . . . . . . . . . . . . . . . 4-2 iDEN Gateway Dispatch Roaming Registration . . . . . . . 4-2 Private Call on 900 MHz (PC) . . . . . . . . . . . . . . . . . . . . . . 4-2 Selective Dynamic Group Call on Secondary Bands (SDGC) . . . . . . . . . . . . . . . . . . . . . . . . 4-2 Lifetime Audit Registration Renewal Required . . . . . . . 4-3 OMC Password Policy Enhancements . . . . . . . . . . . . . . 4-3 5K Sites in an Urban in Full NGD, Legacy OMC and NGO Capacity . . . . . . . . . . . . . . . . . . . 4-3 OMC Display & Process Enhancements. . . . . . . . . . . . . 4-4 RMA (Retuning Management Application) . . . . . . . . . . . 4-4 EBTS Re-rack with Dual Band RFDS . . . . . . . . . . . . . . . 4-4 Global Positioning System Denial Response . . . . . . . . 4-5 Feature Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-7
900 channel resources will be assigned only if there are no 800U channel resources available in the current cell. The 900 band resources will be assigned for SDGC participant(s) in a particular cell only if all of the following conditions are met:
ACG has a suitable 900 band resource available (900 I+N threshold rules will be applied for 900 band resource assignment)
ACG is configured to use 900 band for SDGC and 900 band resources
The controlling DAP supports SDGC on 900 band
All participating SUs in the cell are Private Call on 900 capable
If there are not enough 900-capable SDGC subscribers, there may be a situation where most of the SDGC calls are assigned to the 900 band and then moved to a lower band once a new 800U-only SDGC user joins the call. In this case, depending on the rate at which SDGCs are moved, the operator may wish to turn off SDGC on the 900 band until a greater penetration of 900-capable users is reached. Note: Legacy group call will not be supported on the secondary band.
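The assignment conditions above amount to a conjunction of checks. A minimal sketch, with hypothetical flag names standing in for the real ACG/DAP configuration items:

```python
def can_assign_900_for_sdgc(acg_has_900_resource: bool,
                            passes_900_i_n_threshold: bool,
                            acg_configured_for_900_sdgc: bool,
                            dap_supports_sdgc_900: bool,
                            all_sus_900_capable: bool) -> bool:
    """900 band is used for SDGC participants in a cell only if every
    condition listed above holds; any single failure forces a lower band."""
    return (acg_has_900_resource
            and passes_900_i_n_threshold
            and acg_configured_for_900_sdgc
            and dap_supports_sdgc_900
            and all_sus_900_capable)
```

The last flag captures why penetration matters: one 800U-only subscriber in the cell makes `all_sus_900_capable` false for the whole call.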
This feature provides better security on the NGO OMC, allowing the administrative operator to control access. It enforces requirements on password creation, change, validation, and OMC-R internal requirements such as encryption. The NGO OMC-R will conform to the following policies for passwords:
Passwords a minimum of seven (7) characters in alphanumeric format
Passwords will not be stored in plain-text format in scripts, files, or executable programs
A history of previously used passwords is maintained to prevent users from reusing a password for a minimum of five (5) cycles
Users are required to change passwords at least once every ninety (90) days
User IDs are disabled if inactive for 60 consecutive days
User IDs at least 8 characters in alphanumeric format
The authorized user (root) will have the ability to turn the password policy on/off
Password files must be encrypted
Passwords will be on an Urban basis; only one login will be needed for any OMC-R in an Urban
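The policy points above can be sketched as a validator. The function and constant names are hypothetical; only the rules stated here (length, alphanumeric format, five-cycle reuse window, 90-day age) are encoded:

```python
import re
from datetime import date, timedelta

MIN_PASSWORD_LEN = 7                    # "minimum of seven (7) characters"
HISTORY_DEPTH = 5                       # no reuse for at least 5 cycles
MAX_PASSWORD_AGE = timedelta(days=90)   # change at least every 90 days

def validate_new_password(candidate: str, history: list) -> bool:
    """Check a candidate password against the stated NGO OMC-R policy points."""
    if len(candidate) < MIN_PASSWORD_LEN:
        return False
    if not re.fullmatch(r"[A-Za-z0-9]+", candidate):  # alphanumeric format
        return False
    if candidate in history[-HISTORY_DEPTH:]:         # reuse window
        return False
    return True

def password_expired(last_changed: date, today: date) -> bool:
    """True when the 90-day change requirement has lapsed."""
    return today - last_changed > MAX_PASSWORD_AGE
```

An administration script might call `validate_new_password` at change time and `password_expired` during a nightly audit.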
Two iSC3s
One EAS
Up to nine Quad2 BRs, housed in groups of three
Three sets of RFDS components, in a 3-sector, 3-branch diversity configuration
The duplexers in the DMS rack support both 800 and 900 MHz frequencies, thus eliminating the need for a separate duplexer for each band. The Quad2 BRs are software-configurable from the OMC for 800 or 900 MHz frequencies. The new dual band RFDS supports the following spectrum:
Re-band Spectrum
The Single Rack is shown below: Figure 4-1 Single EBTS Rack
The following new ACG alarms are reported with this feature:
GPS Key-up Override: alarm sent when the configuration parameter is enabled and the site has proceeded to key up with its time reference out of specification.
GPS Spoofing: warning sent when the ACG detects an abnormal condition that might indicate spoofing of the GPS signal.
GPS Tolerance: event sent before a reset that is related to an operator clearing tolerance parameters.
These parameters are tunable from both the Urban and the Site screens with settings from the Urban screen overriding those of the Site screen.
Feature Matrix
The table below shows the major features of SR 16.0 and the Network Elements impacted by these features.
NETWORK
HA DAP
Feature
OMC-R
MDG4
EBTS
iBSC
DAP
iGW
X
X X X X X
5,000 Sites
OMC Display, Process & Password
GPS Denial Response
Automated Neighbor List
SU
IP
Chapter 5
iDEN System Functional Description
In This Chapter Topic See Page
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3 Network Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-4 Green Field Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-4 Split and Mixed Mode Systems . . . . . . . . . . . . . . . . . . . . 5-5 Network Links and Protocols . . . . . . . . . . . . . . . . . . . . . . . . . 5-9 iDEN Network Protocols. . . . . . . . . . . . . . . . . . . . . . . . . . 5-9
Green Field Systems...................................................................... 5-9 Split and Mixed Mode Systems .................................................... 5-12
X.25 .............................................................................................. 5-19 Ethernet........................................................................................ 5-20 Simple Network Management Protocol ........................................ 5-20 Vector Sum Excited Linear Predicting.......................................... 5-20 Frame Relay................................................................................. 5-20 Pulse Code Modulation ................................................................ 5-20
End-to-End Call Flows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-21 Radio Access Call Flow . . . . . . . . . . . . . . . . . . . . . . . . . 5-21 Interconnect Call Flow . . . . . . . . . . . . . . . . . . . . . . . . . . 5-22 Dispatch Call Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24 Packet Data Call Flow. . . . . . . . . . . . . . . . . . . . . . . . . . . 5-25
Introduction
This chapter discusses two versions of SR15.0. They are as follows:
Green Field: A new iDEN system.
Mixed/Split Mode: A system configuration characterized by a mixture of Legacy network elements (i.e., DAPs, APDs, iDACs) and new network elements (i.e., HA-DAPs, iVPUd/di). It has a site configuration characterized by MOBIS, SNMP, and Interconnect voice traffic being carried by circuit-switched channels and Dispatch traffic being carried by a Frame Relay channel.
Network Architecture
This section describes the basic iDEN network using system diagrams that show the system with its subsystems and their major components. Please note the following:
See Table 5-1 Key to iDEN System Components Diagrams on page 5-5 for definitions of the components illustrated in the following diagrams. The components and linkages discussed are for a basic iDEN system. Your system may differ.
Table 5-1
iDEN Components
AAA ACG
APD
BA
CALEA
CWM
DAP DCS
EBTS EC EGT4
International Mobile Station Identifier (IMSI) ranges associated with a particular iHLR; IP addresses of all iHLRs and D-VLRs
ENET
iCP
iDAC
iGW
MDG
MGX
Provides connectivity and interface to the network elements. Transfers the frame relay connections to a single ATM trunk to the BPX for switching. Packages frame relay packets into ATM packets and routes these to the BPX for delivery to the correct network element.
MPS NDM
Mobile Packet Switch: routes Dispatch and Packet Data packets to the proper Dispatch subsystem element.
Network Dispatch Messenger: an optional feature that provides dispatch services and messaging capabilities to end-users operating in an IP environment who want to communicate with iDEN subscriber units or other Dispatch Messenger units.
Operations and Maintenance Center-Radio: establishes, maintains, and collects information about the network, and is available to the system operator.
Packet Data: allows a carrier to supply their subscribers Internet Protocol (IP)-based network access to either the Internet or their own networks.
Point of Demarcation: tap point on the HSSI interface to connect the Dulch comp/analyzer to collect data on the link.
RF Distribution System: connects the base radios to the RF antennas.
Supernode Data Manager / Fault Tolerant: allows law enforcement agencies to obtain call data records as well as intercept audio in an Interconnect phone call for court-authorized monitoring.
Short Message Service: provides the functionality of receiving short text messages.
Spectrum Peripheral Module Enclosure: cabinet that houses the SPM.
Signaling Transfer Point: supports SS7 connectivity to network switches.
Voice Mail: contains stored voice mail for the subscriber.
Transcoder: converts between PCM from the MSC and VSELP to the EBTS.
OMC-R
Table 5-2 Key to Network Protocols and Links Diagram
E1
Connection to the MSC and to the EC
Connection to the OMC-R and to the E1 Switch
Connection to the NDM and to the DACS
Connection to the DSX and to the DACS
Connection to the EBTS and to the DSX
Connection to the E0 Switch and to the Network Time Server
Connection to the E0 Switch and to the E0 Router
Ethernet
Connection to all ISP/Internet and all LAN/WAN networks
Connections within the Network Management components
Connections between the VM, SMS, and IVR
Connection to the EBTS and to the DACS
Connection to the DACs and to the iCP
Connection to the EBTS and to the DACS
MTL
OC3
Connection to the DACS and to the MSC
Connection to the MSC and to the EC and iVPU
Connection to the DACS and to the iVPU
OML
Connection to the DACs and to the iCP
Connection to the iBSC and to the LAN
Connection to the iVPU and to the LAN
Connection to the MSC and to the VM, IVR, EC, and iVPU
Connection to the EBTS and to the DACS
Connection to the MSC and to the VM, SMS, STP, and PSTN
Connection to the PSTN and to the STP
STM1
Connection to the MSC and to the EC and iVPU
Connection to the DACS and to the iVPU
VSELP
Connection to the DACS and to the iVPU
Connection to the EBTS and to the DACS
Figure 5-4 Legacy Network Protocol and Links Diagram on page 5-13 illustrates the various protocols used throughout the basic legacy SR15.0 iDEN network. Table 5-2 Key to Network Protocols and Links Diagram on page 5-11 explains where the protocols and links are used in a legacy SR15.0 iDEN system. This diagram does not illustrate all possible configurations.
Figure 5-4
Table 5-3 Key to Legacy Network Protocols and Links Diagram
E1
Connection to the MSC and to the EC
Connection to the DACS and to the MPS
Connection to the XCDR and to the MSC
Connection to the DSX and to the MSC
Connection to the OMC-R and to the E1 Switch
Connection to the NDM and to the DACS
Connection to the E0 Switch and to the Network Time Server
Connection to the E0 Switch and to the E0 Router
Connection to the EBTS and to the DSX
Connection to the DSX and to the DACS
E3
Ethernet
Connection to the DSX and to the MPS
Connection to all ISP/Internet and all LAN/WAN networks
Connections within the Network Management components
Connections between the VM, SMS, and IVR
Frame Relay
Connection to the DSX and to the DACS
Connection to the MPS and to the DACS, BSC, V.35 Patch, DSX, NDM, POD, Hi Speed V.35 Patch
Connection to the POD and to the APD, iDAC, iSG
Connection to the V.35 Patch and to the MDG2
Connection to the Hi Speed V.35 Patch and to the MDG4
Connection to the TDAP (R12K) and to the DSX
Connection to the TDAP (R10K) and to the V.35 Patch
HSSI
Connection to the MPS and to the PODs
Connection to the PODs and to the APD, iDAC, iSG
Connection to the DACs and to the iCP
Connection to the DSX and to the DACS
Connection to the CP and to the DSX
Connection to the iCP and to the DSX
OC3
Connection to the MSC and to the EC and iVPU(i, d, di)
Connection to the DACS and to the iVPU(i, d, di)
OML
Connection to the DACs and to the iCP
Connection to the CP and to the DSX
Connection to the iCP and to the DSX
Connection to the XCDR and to the DSX
Connection to the iVPU(i, d, di) and to the LAN
Connection to the iBSC and to the LAN
PCM
Connection to the MSC and to the VM, IVR, EC, IWF, and iVPU(i, d, di)
Connection to the XCDR and to the MCTM cabinets
SNMP
Connection to the EBTS and to the DACS
Connection to the CP and to the DSX
Connection to the iCP and to the DSX
Connection to the DSX and to the DACS
Connection to the DACS and to the NDM
SS7
Connection to the MSC and to the VM, SMS, STP, and Channel Bank
Connection to the PSTN and to the Channel Bank
STM1
Connection to the MSC and to the EC and iVPU(i, d, di)
Connection to the DACS and to the iVPU(i, d, di)
T1
Connection to the MSC and to the EC
Connection to the DACS and to the MPS
Connection to the XCDR and to the MSC
Connection to the DSX and to the MSC
Connection to the NDM and to the DACS
Connection to the EBTS and to the DSX
Connection to the DSX and to the DACS
T3
Connection to the MPS and to the Hi Speed V.35 Patch
Connection to the MPS and to the V.35 Patch
Connection to the V.35 Patch and to the MPS
Connection to the MPS and to the OMC-R
VSELP
Connection to the DACS and to the iVPU(i, d, di)
Connection to the EBTS and to the DACS
Connection to the XCDR and to the DSX
Connection to the DSX and to the DACS
Part Number
68P81129E05 68P81129E03 68P81129E04 68P81129E13 68P81129E11 68P81129E12 68P81129E02 68P81129E01 68P81127E99 68P81127E88 68P81127E89 68P81127E90 68P81127E96 68P81127E93
Table 5-4
Part Number
68P81127E95 68P81129E10 68P81127E91 68P81127E98 68P81129E08 68P81127E97 68P81127E92 68P81129E15
Network Links
The iDEN system uses several interface types. The transport facilities between the DACS may include microwave, fiber, and/or copper. The iDEN system uses the following links:
Coaxial
Within the system, coaxial cable is the physical interface between the EBTS BRs and the antenna system. Coaxial cable is also used between the BRs and the EBTS site controller.
HSSI
The APD contains four High Speed Serial Interface (HSSI) ports that are opened and closed under the control of the MTX board. When a port is opened, the incoming packet is duplicated to the output ports to the EBTS sites as determined by the mobility management of the DAP. The iDAC has one HSSI interface, which transports the voice packets between the iDAC and the Switching Modules. The HSSI interface can carry up to 10 megabits per second (Mbps) of traffic (which includes all the packets for voice and control). The iDAC uses this interface to communicate with the frame relay devices.
The Tandem DAP uses High Speed Serial Interface (HSSI) to interface to the APDs.
V.35
The OMC-R Operations and Maintenance Link (OML) uses V.35 cabling between the OMC-R and the DAP. The connection to the BSC may also be V.35 cabling.
EIA-232
The OMC-R OML also uses EIA-232 cabling to the switched circuits (MSC).
Twisted Pair
Twisted-pair (10BaseT) Ethernet may be used between the OMC-R and the TDAP. A minimum of Category 5 cable and connections is required. Other elements may be optionally networked by Ethernet. OMC-Rs may be networked using an Ethernet link.
The T-DAP and the iHLR communicate to the OMC-R with this interface. The DAP and the iHLR communicate with each other over the twisted pair. DAP to DAP communication occurs over the twisted pair.
T1/E1
The iDEN network has standardized on the T1 as the physical interface. The exchange of information between the various pieces of equipment is over direct links or dedicated facilities using T1 (E1) industry standards. Full or fractional T1s are used. In the MSO, the network devices are interconnected with T1s. Optimization, load balancing, and capacity planning will be affected by the physical backbone used. Network and iDEN equipment is ordered with the appropriate hardware to support either an E1 or a T1 facility. The equipment may be re-configured between the span types by replacing hardware. Since E1s have a different configuration than T1s, reconfiguring may impact the capacity of network elements.
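The capacity difference between the two span types follows from standard channelization (this is the generic T1/E1 arithmetic, not iDEN-specific provisioning):

```python
DS0_RATE_KBPS = 64  # one 64 kbps timeslot

def t1_payload_kbps(ds0_count: int = 24) -> int:
    """A full T1 carries 24 DS0s; fractional T1s use fewer."""
    return ds0_count * DS0_RATE_KBPS

def e1_payload_kbps(traffic_slots: int = 30) -> int:
    """An E1 frame has 32 timeslots; slot 0 carries framing and slot 16
    commonly carries signaling, leaving 30 for traffic."""
    return traffic_slots * DS0_RATE_KBPS

print(t1_payload_kbps())  # 1536 (line rate 1544 kbps with framing)
print(e1_payload_kbps())  # 1920 (line rate 2048 kbps)
```

The extra timeslots are why swapping a network element between T1 and E1 spans can change its effective capacity.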
Interface Protocols
The iDEN system supports and uses several protocols and interface standards. These include the following:
AFR
Radio Link Protocol
Motorola Implementation A-bis interface
Signaling System Seven
X.25
Ethernet
Simple Network Management Protocol
AFR
All Frame Relay (AFR) is used on the OMC-R, iBSC, EBTS, and iVPU.
Radio Link Protocol (RLP) is the method of transferring compressed character data between the SU and the MSC-IWF during circuit switch data networking.
Motorola's implementation of GSM A-bis (MOBIS) is a modified version of the GSM A-bis interface. MOBIS provides increased timing and error-handling capabilities to provide greater radio link reliability and improved link quality. These modifications include changes in message format and the inclusion of additional parameters for handover messaging. Modifications adapt the A-bis standard to the RF link sideband and help to assure timing and mapping of radio link data packets.
Signaling System Seven (SS7) is the routing and control interface between the other provider networks and the MSC and from the MSC to the BSC-CP. SS7 is used by the Short Message Service and Voice Mail. Interconnect calls use SS7 at the BSC. This SS7 is the Message Transfer Link (MTL) and is implemented in accordance with ANSI SS7-ISUP. CCITT SS7 signaling is used between the MSC and the HLR and the MSC and the SMS-SC.
X.25
The X.25 protocol is used primarily in the operation and maintenance functions of the network. The X.25 link is used by the Operations and Maintenance Link (OML) and its functions between network devices linked to the OMC-R. The OMC-R uses X.25 Link Access Procedure, Balanced (LAP-B) and Link Access Procedure, D-channel (LAP-D). The LAP-D protocol is used in signaling as the Message Transport Protocol (MTP) between the EBTS and the BSC. LAP-B is the protocol used between the BSC and the OMC-R. The BSC handles the translation between LAP-D and LAP-B. X.25 functions primarily at 19.2 kbps on V.35 cables (including the DAP). The T-DAP uses an Ethernet link but maintains X.25.
Ethernet
The principal use of Ethernet in the iDEN system is between components in the EBTS. The EBTS uses Ethernet for traffic and control by running the Carrier Sense Multiple Access/Collision Detect (CSMA/CD) protocol. Transmission Control Protocol/Internet Protocol (TCP/IP) is supported in the Ethernet environment in accordance with IEEE 802.X. TCP/IP may be used to communicate with the administration and data and other service provider equipment. The TDAP - OMC-R interface is Ethernet running the X.25 protocol.
Simple Network Management Protocol (SNMP) is used over the dedicated links between the EBTS and BSC. SNMP is used within the X.25 OML to manage the alarm, messaging, control and performance data routing to the EBTS.
Vector Sum Excited Linear Predicting (VSELP) protocol is a voice compression method used in the mobile station and the XCDR. The SU compresses voice on all transmits and decompresses voice on all receives. The MPS-DAP cluster controls the movement of VSELP packets between EBTSs during Dispatch calls. The BSC-XCDR converts VSELP to Pulse Code Modulation (PCM) for Interconnect calls.
Frame Relay
The iDEN system supports the frame relay protocol during Dispatch communications. Depending on how the EBTS is configured for Interconnect and Dispatch, the number of DS0s available will vary. The iDEN system engineering staff provides guidelines to allow service providers to provision the T1/E1s for Interconnect and Dispatch. The Advanced Packet Duplicators (APD) and iSC are linked using Frame Relay LAP-D on a V.35 link. This is an indirect link through the MPS. The speed is at least 256 kbps from the DAP to the MPS, and is set by the MPS. The T-DAP uses a High Speed Serial Interface (HSSI) to interface to the APDs. For additional information, see EBTS Backhaul: Frame Relay DS0 Requirements (WP2002-025).
Pulse Code Modulation (PCM) is the telephone industry standard for the format and encoding of data packets transmitted across voice networks. PCM is used by iDEN for Interconnect calls to and from land networks. The MSC and the BSC XCDR use PCM for voice and data transmission. The XCDR converts between PCM from the MSC and VSELP to the EBTS.
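PCM on North American T1 circuits conventionally uses G.711 mu-law companding; the encoder below is a textbook sketch under that assumption, not Motorola's XCDR implementation:

```python
BIAS = 0x84   # 132, added before the segment search (G.711 mu-law convention)
CLIP = 32635  # clip level so the biased magnitude fits in 15 bits

def linear_to_ulaw(sample: int) -> int:
    """Encode one 16-bit signed linear PCM sample as a G.711 mu-law byte."""
    sign = 0x80 if sample < 0 else 0x00
    magnitude = min(abs(sample), CLIP) + BIAS
    exponent = max(magnitude.bit_length() - 8, 0)     # segment number 0..7
    mantissa = (magnitude >> (exponent + 3)) & 0x0F   # 4-bit step in segment
    return ~(sign | (exponent << 4) | mantissa) & 0xFF  # bits inverted on the wire
```

Silence (linear 0) encodes to 0xFF, the conventional idle pattern on mu-law trunks; each second of one DS0 carries 8000 such bytes (64 kbps).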
The SU transmits a signal to the antennas. The Radio Frequency Distribution System (RFDS) routes radio frequency signals from the site receive antennas, into the EBTS, and back out to the site antennas for transmission. The iMU/EAS/EAS2 receive alarm signals from various EBTS site equipment and send them to the iSC for processing. The iSC performs all control and synchronization functions for the EBTS. The Base Radio (BR) sends and receives control information and compressed voice data. The Digital Cross Connect Switch (DCS) is a standard, programmable telephone industry device that provides a transport system within the iDEN network.

Radio Access Call Flow
Figure 5-5
The SU sends and receives voice data. The EBTS converts the radio link to the land link and discriminates between Interconnect and Dispatch calls. The iBSC routes Interconnect packets between the EBTS and the MSC. The MSC determines services and location information. The MSC also controls and routes the calls to other providers. The iBSC-iCP routes control packets. The iBSC-iVPU handles voice circuits subrates and PCM CICs.
The Interconnect call flow consists of the following registration, channel access, and data transfer steps.

To set up an Interconnect call:

Step 1: Make a Call (origination)
a) Send Random Access Protocol (RAP) on PCCH
b) Mobile gets a dedicated control channel assigned
c) Authentication (optional)
d) Call setup transaction
e) Get assigned to a traffic channel
f) Call termination request on associated control channel
g) Channel released

Step 2: Get a Call (receive)
a) Locate SU by last known Interconnect Location Area (ILA)
b) Page SU on primary control channel of all sites in ILA
c) RAP on primary control channel
d) Get dedicated control channel assigned
e) Authentication (optional)
f) Call setup transaction
g) Get assigned to a traffic channel
h) Call termination request on associated control channel
i) Channel released
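The origination sub-steps above can be modeled as an ordered checklist; the step strings paraphrase the manual and the helper function is hypothetical:

```python
from typing import List, Optional

# Origination sub-steps, in the order the manual lists them.
ORIGINATION_STEPS = [
    "send RAP on PCCH",
    "dedicated control channel assigned",
    "authentication (optional)",
    "call setup transaction",
    "assigned to traffic channel",
    "call termination request on associated control channel",
    "channel released",
]

def next_step(completed: List[str]) -> Optional[str]:
    """Return the next pending step, or None when the call flow is finished."""
    for step in ORIGINATION_STEPS:
        if step not in completed:
            return step
    return None
```

A call-flow tracer could feed observed events into `completed` and flag a protocol error whenever an event arrives that is not the expected `next_step`.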
Figure 5-6
Note
See Interconnect Subsystem on page 3-9 for a more detailed call flow.
The SU sends and receives voice data. The EBTS converts the radio link to the land link and discriminates between Interconnect and Dispatch calls. The Dispatch Application Processor (HA-DAP) assigns signaling and routing paths for Dispatch and Packet Data calls. The iDEN Home Location Register (HA-iHLR) authenticates the SU and registers the Dispatch and Packet Data in the iDEN system. Enhanced Global Title Translation Table Tool (EGT4) provides International Mobile Station Identifier (IMSI) ranges associated with a particular iHLR and the IP addresses of all iHLRs and D-VLRs. The iDEN Surveillance Gateway (iSG) provides law enforcement surveillance capability for the iDEN Dispatch and Packet Data subsystems. Network Dispatch Messenger (NDM) allows a direct connection between your PC and the iDEN phone.
The simplified steps of a typical Dispatch call are:

Step 1: A Dispatch call is requested via PTT activation.
Step 2: The call request packet is routed to the DAP.
Step 3: The DAP recognizes the MS's group and finds the group members by Dispatch Location Area (DLA).
Step 4: The DAP sends location requests to the members' DLAs; the group member MSs respond with their current sector or cell location.
Step 5: The DAP supplies each member's routing information to the MPS and PD.
Step 6: Call voice packets are sent to the PD, where they are replicated and distributed to the MSs' sites.
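The replication step above can be sketched as follows: the Packet Duplicator sends one copy of each voice packet per distinct site hosting a group member, not one per member. The member and site names below are invented for illustration:

```python
def replicate_to_sites(voice_packet: bytes, member_sites: dict) -> dict:
    """Duplicate one Dispatch voice packet to each distinct cell/site that
    hosts at least one group member (a sketch of the Packet Duplicator's job)."""
    sites = set(member_sites.values())      # collapse members to unique sites
    return {site: voice_packet for site in sites}

# Hypothetical group: two members share site-A, one is at site-B.
members = {"su1": "site-A", "su2": "site-A", "su3": "site-B"}
out = replicate_to_sites(b"\x01voice", members)
print(sorted(out))   # one copy per site, not per member
```

This per-site fan-out is what keeps backhaul load proportional to the number of cells in the call rather than the number of subscribers.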
Figure 5-7
Note: See Dispatch Subsystem on page 3-17 for a more detailed call flow.
- SU - sends and receives voice data.
- EBTS - converts the radio link to the land link and discriminates between Interconnect and Dispatch calls.
- MDG (Mobile Data Gateway) - directs connections to the Internet.
- Home Agent (HA) - routes Packet Data information to and from the SU.
- Billing Accumulator (BA) - collects and stores billing records produced by the MDG and provides an avenue for the stored billing records to be transferred to an outside billing center.
- Authentication, Authorization, and Accounting (AAA) Server - authenticates Packet Data.
The Packet Data call flow process consists of the following registration, channel access, and data transfer steps.

To Set Up a Packet Data Call
1. Registration for Dispatch service (in home or visited network) and authentication occurs. When the SU is outside its home network, registration for Dispatch service is processed differently.
2. Registration for Packet Data service occurs.
3. The SU initiates a data transfer session to gain access to a packet channel and goes into the Session Active state.
4. Registration for mobile IP service and tunnel setup occurs.
5. The MDG initiates a session for the targeted MS.
6. Session reconnect/reselect during cell changes occurs.
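The session behavior implied by steps 3 and 6 can be sketched as a small state machine. The state and event names below are invented for illustration; the manual names only the Session Active state explicitly.

```python
# Minimal sketch of the Packet Data session states implied above:
# a registered SU enters Session Active on data transfer (step 3),
# and reconnects/reselects across cell changes (step 6).

TRANSITIONS = {
    ("Registered", "data_transfer"): "SessionActive",   # step 3
    ("SessionActive", "cell_change"): "Reselecting",    # step 6
    ("Reselecting", "reconnect"): "SessionActive",      # step 6
}

def next_state(state, event):
    """Return the next session state; unknown events leave it unchanged."""
    return TRANSITIONS.get((state, event), state)

s = next_state("Registered", "data_transfer")
s = next_state(s, "cell_change")
s = next_state(s, "reconnect")
assert s == "SessionActive"
```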
Figure 5-8
For more information, see iDEN Packet Data Service Registration and Call Processes (68P80801C90).
Note: See Packet Data Subsystem on page 3-23 for a more detailed call flow.
Chapter 6
Radio Access Network Details
In This Chapter Topic See Page
Purpose. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3 Radio Access Network Subsystem Description . . . . . . . . . . 6-4 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
References ..................................................................................... 6-4
Enhanced Base Transceiver System . . . . . . . . . . . . . . . . . . 6-10 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-10 EBTS Hardware Description . . . . . . . . . . . . . . . . . . . . . 6-10
ACG ............................................................................................. 6-13 iSC3 ............................................................................................. 6-13 iSC2 ............................................................................................. 6-14 iMU and EAS................................................................................ 6-16 RFDS ........................................................................................... 6-16 Base Radio................................................................................... 6-18
Purpose
This chapter describes the major network elements of the Radio Access Network subsystem in terms of their key hardware components, software functions, and network management facilities. Each key component is explained, including its key functions, hardware and software descriptions, and network management.
This section describes the iDEN Radio Access Network Subsystem, including key hardware components, software function, and network management facilities.
References
The following is a list of technical documentation and training courses that support the EBTS:
- EBTS Fundamentals Course (FND104)
- EBTS Operations & Maintenance Course (OPM203)
- EBTS Troubleshooting Guide (6881012Y79)
- EBTS Volume 1 System Installation and Testing (68P80801E35)
- EBTS Volume 2 Base Radios (68P80801E35)
- EBTS Volume 3 RF Distribution Systems (68P80801E35)
- 900 Quad EBTS Multi-Sector Rack Configuration (WP05008B)
- iDEN EBTS Fan Maintenance Plan (WP2003-14)
- iDEN Integrated Site Controller System Manual (68P81098E05)
- iDEN Gen 3 Site Controller System Manual (68P80801E30)
- iDEN EBTS Total Site Maintenance (WP2001-010)
- Ethernet Analyzer Detective Software User's Manual (68P81001Y55)
- Inbound Handover Threshold Optimization (WP2002-026)
These documents and others are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
Figure 6-1
The Next Generation Dispatch (NGD) system provides higher capacity and greater system reliability in a significantly smaller footprint. It is characterized by All Frame Relay (AFR) backhaul, split backhaul, or a combination of the two. The main changes to the iDEN network with the NGD system are:
The iDEN Vocoder Processor Unit (iVPU) is designated according to its configuration:

- iVPU Dispatch (iVPUd) - iVPU hardware operating with only Dispatch and Packet Data functions. (There is no Interconnect transcoding.) The iVPUd supports Split Mode Backhaul EBTSs only.
- iVPU Dispatch Interconnect (iVPUdi) - software upgrade that supports Dispatch and Packet Data functions in addition to Interconnect transcoding. Supports Split Backhaul and AFR Backhaul EBTSs.
- iVPU Interconnect (iVPUi) - Interconnect transcoding only. Software upgrades of fielded iVPUs default to this mode. Supports Split Backhaul EBTSs only.
The NGD system configuration impacts the EBTS in that all EBTSs communicate via the iVPUs. (See Figure 6-2 NGD Configuration on page 6-6.)
- Single Software Load
- One Network Element from the perspective of the OMC
Figure 6-2 NGD Configuration
The NGD configuration allows the EBTS to automatically determine which backhaul mode (Split or AFR) to use to communicate with the Northbound network and with the new NGD NEs. The backhaul mode is determined through backhaul negotiation rather than configuration. The NGD architecture supports various configurations, including:

- Mixed Mode
- Split Mode
- All Frame Relay
Between Legacy and NGD architecture, the Mixed Mode network is considered a transitional phase. Mixed Mode is a system configuration that is characterized by a mixture of Legacy network elements (e.g., BSCs, DAPs, APDs, iDACs) and new NGD network elements (e.g., HA-DAPs, iVPUs). See Figure 6-3 NGD Mixed Mode Configuration on page 6-7. In the Mixed Mode phase of the iDEN architecture, the network elements that are required for Next Generation Dispatch are migrated into the Legacy architecture. These network elements can coexist with the legacy network elements.
Figure 6-3 NGD Mixed Mode Configuration
Split Mode
The NGD network also supports the Split Mode site configuration, which is characterized by Mobis, SNMP, and interconnect voice traffic being carried on circuit-switched channels while dispatch traffic is carried on a Frame Relay channel. The Mobis and SNMP channels must terminate on an iCP or legacy Base Site Controller (BSC). Interconnect voice traffic can terminate on a legacy Transcoder (XCDR), an iVPUi, or an iVPUdi. The Frame Relay traffic can terminate on a legacy Dispatch Application Processor (DAP), a Mobile Data Gateway (MDG), an iVPUd, or an iVPUdi.
All Frame Relay

All Frame Relay (AFR) is supported in both Mixed Mode and the final NGD configurations. AFR trunks carry signaling, interconnect, dispatch, and packet data traffic over Frame Relay. In the AFR configuration, all traffic is packetized: the traditional EBTS circuit traffic (O&M, Mobis, and interconnect bearer) must be encapsulated in frame relay packets using appropriate methods.
When a call is initiated, the EBTS converts the radio link to various land-line protocols depending on whether the call is Dispatch, Interconnect, or Packet Data. The DAP/HA-DAP then determines service availability and location information. The voice packets are duplicated and routed to the appropriate cell sites to support the group call. The HA-iHLR provides the databases where subscriber profile information is stored for Dispatch authentication.
The EBTS establishes and maintains the radio link on the RF control channels when a call is initiated. Once the radio link is established, the EBTS determines what protocol to use based on whether the call is Dispatch, Interconnect, or Packet Data. If the location of the SU indicates that another EBTS has better radio link quality, the SU reconnects to the new EBTS without handover control overhead; the SU reselects and connects dynamically as required. The availability of traffic channels in a specific area increases when additional BRs are added. With an increase in BRs, more Digital Access Cross-Connect Switch (DAC) and DAP/HA-DAP resources are needed to support the increased radio link capacity.
EBTS
The EBTS is the equipment through which the radio signal from the subscriber gains access to the system. This is also referred to as a site or cell site. EBTS also supports a power backup system.
DCS
The Digital Cross Connect Switch (DCS) is a standard, programmable telephone industry device that provides a transport system within the iDEN network. It is the central connecting point for ALL T1 and E1 links in the iDEN network and is used to separate the information on the T1 or E1 links and route the data to the correct network element. For Interconnect calls, the DCS cross-connects interconnect voice packets to the BSC/iBSC complex. The BSC/iBSC coordinates the movement of packets between the DCS and the MSC. For Dispatch calls, the DCS connects dispatch voice packets to the iVPU. The iVPU coordinates the movement of voice packets between the EBTS and the iVPUs. The iVPU, under the direction of the HA-DAP, routes voice packets to another EBTS if the Subscriber Unit (SU) reconnects. Transport facilities between the EBTS and DCS may use several interface types.
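The DCS routing decision described above — interconnect voice to the BSC/iBSC complex, dispatch voice to the iVPU — can be sketched as a simple lookup. This is illustrative only: the real DCS is a programmable TDM cross-connect, not software of this kind.

```python
# Hypothetical sketch of the DCS routing rule described above.

ROUTE = {
    "interconnect": "BSC/iBSC",   # BSC/iBSC relays packets to the MSC
    "dispatch": "iVPU",           # iVPU relays packets under HA-DAP direction
}

def route_call(call_type: str) -> str:
    """Return the network element the DCS cross-connects this traffic to."""
    try:
        return ROUTE[call_type]
    except KeyError:
        raise ValueError("unknown call type: " + call_type)

assert route_call("interconnect") == "BSC/iBSC"
assert route_call("dispatch") == "iVPU"
```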
iSC
The iSC controls the routing of information to/from the Base Radio(s), performs resource allocation, satellite tracking, and backhaul to the Mobile Switching Office (MSO). It routes interconnect, Dispatch, and Packet information to and from the MSO via a T1 or E1 trunk.
Base Radio
The BRs perform the RF communications with the SUs, while the EBTS provides the Radio Frequency (RF) link between the land network and the SU. BRs send both control information and compressed speech over a radio channel, and perform packet data translations.
RFDS
The Radio Frequency Distribution System (RFDS) permits the combining of the transmit RF signal out of the BRs. It amplifies and distributes the receive signal to the BRs. The hardware consists of the Transmit Combiner Deck(s) and the Receive Multicoupler assemblies.
Overview
The Radio Access Network includes the EBTS, which provides the RF interface for the SU. It also provides the T1 serial data interface to the iDEN Mobile Switching Office (MSO), which provides dispatch, interconnect, and packet data services. The EBTS also supports a power backup system. The EBTS is the equipment through which the radio signal from the subscriber gains access to the system. This is also referred to as a site or cell site.
Figure 6-4
EBTS Cabinets
The EBTS Control Cabinet contains the network elements that are responsible for radio resource management. Radio resource management is the control, assignment, and monitoring of the radio channels and Time Division Multiple Access (TDMA) time slots of the radios under the control cabinet's control. In addition to radio resource management, the control cabinet also routes data and voice information to the correct network element at the MSO and provides the interface between the rest of the EBTS equipment and the MSO. The RF Cabinet is a self-supporting structure, which contains various equipment modules including the following (see Figure 6-5 Radio Frequency Cabinet on page 6-12):
- Base Radio
- Breaker Panel - the central location for power distribution and overload protection of the RF Cabinet. The circuit breakers provide on/off control for these modules and act as automatic disconnect switches in the event of an electrical overload.
- Junction Panel (rear of cabinet) - provides a central location for cabinet grounding and intercabling. Access to the Junction Panel is gained from the rear of the RF Cabinet.
- RF Distribution System
Figure 6-5 Radio Frequency Cabinet
The following EBTS network elements are described in more detail below:
ACG
The Access Control Gateway (ACG) is the site controller and the communications gateway between an EBTS site and the MSO. The ACG manages its RF resources and reports performance data to the OMC, which can be used for troubleshooting.

Note: The terms Integrated Site Controller (iSC) and Access Controller Gateway (ACG) can be used interchangeably to refer to the same component.

The ACG uses either of two Integrated Site Controller (iSC) platforms:
- iSC3 - the current platform
- iSC2 - still used in the field but not available in new EBTS shipments
The main differences between the two iSCs are the number of rack units (size/height), the processor speed, and the number of T1/E1 and Ethernet ports. The ACG also allows the OMC-R operator to remotely switch over to the standby ACG.
iSC3
The third generation Site Controller (iSC3) is capable of supporting the advanced iDEN features that will be developed over the next several years. The GEN III Site Controller (see Figure 6-6 below) is compatible with the current generation of EBTS equipment. The iSC3 supports more RF carriers than the iSC2.

Figure 6-6 iSC3 Front Panel
Two iSCs are recommended for redundant (active/standby) operation. These units are software switchable; communication between the active and standby controllers is initiated by the active controller. This redundancy reduces maintenance and downtime. In the event of an EBTS failure, the iSC allows an OMC-R operator to perform tests to isolate faults between the EBTS and the facilities (T1/E1) and to load new software. The functionality of the SRI, ELP, STP, and transient protection cards in the iSC2 is built into the motherboard of the iSC3, which contains no field replaceable units.
iSC2
As with the latest version of the site controller, the iSC2 performs all control and synchronization functions for the EBTS. This includes radio resource management: the control, assignment, and monitoring of the radio channels and time slots of the radios under its control.

Note: Beginning with SR 15.0, the iSC periodically monitors its CPU utilization. When utilization rises above certain thresholds, the iSC2 performs a set of actions to reduce CPU utilization; when utilization is lowered and exits the threshold level, normal processing is restored. All iSC2s will be required to have their RAM upgraded to 32MB. The OMC will not download new configuration files to iSC2s with less than 32MB.

The iSC is also the interface between the MSO and the BRs. The iSC communicates with the MSO via either a T1 or E1 interface, and with the BRs via an Ethernet LAN. There is usually a redundant iSC that will control the site in the event the primary iSC fails. The iSC2 consists of the following modules (see Figure 6-7 iSC2 - Front and Rear Views on page 6-15):
- Ethernet LAN PCI (ELP) - provides the IEEE 802.3 10Base2 Ethernet interface between the Controller and the BRs.
- Subrated T1 PCI/Subrated E1 PCI card (STP/SEP) - provides the site network interface to the Controller T1/E1 span; integrates the Channel Service Unit (CSU) functionality used in other systems.
- Site Reference ISA card (SRI) - provides the EBTS with an accurate frequency reference (5 MHz) and timing information (1 PPS).
- Serial/Parallel Transient protection card - provides transient protection for the serial/parallel data I/O. The Serial/Parallel card is the connection point for the RS232 and IEEE 1284 parallel connections between the Controller and the iMU.
- T1/E1 Transient protection card - provides the physical interface to the network (MSO) and conditions the signals. The T1 is first routed through the transient protection circuitry before being processed by the STP card; the circuitry protects the STP card from surges on the T1 line.

Figure 6-7 iSC2 - Front and Rear Views
Most systems are configured with two Controllers (a main and a standby) and an iMU.
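The SR 15.0 CPU-utilization monitoring noted above can be sketched as a simple hysteresis loop: load-reduction actions begin when utilization crosses a high threshold and normal processing resumes only once utilization falls back out of the threshold level. The threshold values below are invented for illustration; Motorola does not publish them here.

```python
# Hedged sketch of the iSC2 CPU-utilization monitor described above.
# Threshold values are illustrative, not from the manual.

HIGH_THRESHOLD = 90.0   # enter load-reduction above this (assumed value)
EXIT_THRESHOLD = 75.0   # resume normal processing below this (assumed value)

class CpuMonitor:
    def __init__(self):
        self.reducing = False

    def sample(self, cpu_percent: float) -> str:
        if not self.reducing and cpu_percent > HIGH_THRESHOLD:
            self.reducing = True      # perform actions to reduce CPU load
        elif self.reducing and cpu_percent < EXIT_THRESHOLD:
            self.reducing = False     # normal processing restored
        return "reducing" if self.reducing else "normal"

m = CpuMonitor()
assert m.sample(95.0) == "reducing"
assert m.sample(80.0) == "reducing"   # still above the exit threshold
assert m.sample(70.0) == "normal"
```

The hysteresis gap between the two thresholds prevents the controller from oscillating in and out of load reduction when utilization hovers near a single threshold.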
iMU and EAS

Both the iDEN Monitor Unit (iMU) and the Environmental Alarm System (EAS or EAS2) receive alarm signals from various EBTS site equipment and send them to the iSC for processing. The alarm signals can come from BRs, the RFDS, and circuit breaker status signals, as well as customer-defined alarms such as AC power, smoke alarms, intrusion alarms, high/low temperature alarms, and antenna tower lights. The EAS monitors EBTS site alarms and reports them to both iSC3s through the parallel ports on the back, labeled Controller A and Controller B. The EAS is sometimes referred to as the IMU II. The control cabinet contains either an iMU or an EAS, depending on the type of iSC that is installed.
RFDS
The RFDS is the frequency combiner that allows several BRs to share a common antenna system. The EBTS can be configured with either hybrid or cavity combiners. This creates minor changes in maintenance and operation but does not affect the functionality of the radio link. The EBTS site antenna configurations vary with each installation. Each EBTS cell requires RF antennas for TX and RX signals. Site antenna configurations include three RX antennas for each omni or sector RF coverage area. Sites may be configured with a separate TX antenna for each omni or sector RF coverage area, but typically the site antennas are configured so that the TX signals are applied to one or more of the RX antennas. Therefore a site may have three or four antennas for each omni or sector RF coverage area. There are several different ways to configure the RFDS depending upon the number of BRs and the antenna configurations. The following are the types of RFDS used in the EBTS site:
- Duplex RFDS - a broadband, three-antenna RF distribution system. It allows a transmit (TX) and a receive (RX) path to share a common antenna. A total of three duplexers reside within this distribution system. (See Figure 6-8 Duplex RFDS on page 6-17.)
- Cavity Combining RFDS - a narrowband, three- or four-antenna, non-duplexed RF combining system. It consists of two or three receive antennas and a single transmit antenna. (See Figure 6-9 Cavity Combining RFDS on page 6-17.)
Figure 6-8 Duplex RFDS
Figure 6-9 Cavity Combining RFDS
Base Radio
The BR provides reliable digital capabilities via a compact, software-controlled design. Increased channel capacity is achieved through voice compression techniques and time division multiplexing. The BR is mounted below the RF Distribution System in the RF Cabinet. (See Figure 6-5 Radio Frequency Cabinet on page 6-12.) A BR can be removed from the EBTS and replaced with a new BR without taking the site off the air. The BRs and the radio link traffic (voice and data) are controlled by the ACG/iSC over the Ethernet LAN. BRs are either single-channel units that provide a single 25 kHz RF carrier in one physical unit or QUAD Channel BRs that support up to four 25 kHz RF carriers in one physical unit. The QUAD BR can be software-configured from the OMC-R for 1 to 4 carriers. Each BR consists of the following components:
- Base Radio Controller (BRC) - controls the BR RF transmitter (TX) and receiver (RX). The BRC has a 10Base2 Ethernet interface for communication with the ACG/iSC.
- Exciter - provides the low-level TX RF output signal to drive the RF Power Amplifier.
- RF Power Amplifier (PA) - amplifies the low-level TX RF signal from the exciter. The RF PA provides the final TX RF power output to the RFDS and TX antenna.
- Receiver - contains three receivers that provide RX antenna diversity.
- Power Supply - provides DC voltages for the various BR modules.
The BR types are:

- EBRC - can only support a single 25 kHz RF carrier.
- 900 MHz Quad BR - assigned 900 MHz frequencies whenever possible for interconnect traffic. Call setup and control occur in the 800 MHz band.
- Quad BR - supports four 25 kHz RF carriers on one physical BR and utilizes the same platform as the single-channel BR. It can be software-configured from the OMC-R for 1 to 4 carriers.
- Quad 2 BR - provides customers with the flexibility to deploy either 800 MHz or 900 MHz frequencies with the same hardware platform and reduces the number of BR Field Replaceable Units (FRUs) to be maintained. To support the extra memory requirements of the Quad2 BR image, a Quad2 BR is supported only on sites with an iSC3 or an iSC2 with a memory upgrade.
OMC-R
As with other iDEN network elements, the OMC-R provides the capability to remotely monitor and maintain the EBTS, giving the network operator a means of loading software, configuring the site, and isolating faults within the iDEN network. It also provides support for performance and security management. The OMC-R receives events and alarms to help isolate problems. Configuration Management allows for parallel BR downloads; this allows simultaneous software downloads to each BR in the EBTS and reduces downtime. Statistics are sent to the OMC-R every 30 minutes. Alarms, state events, and faults are sent as they occur.
MMI Commands
As with other iDEN network elements, communication with the system is through the use of Man Machine Interface (MMI) commands and a service computer. MMI commands provide testing capabilities, access to alarm log files, and various diagnostic tests. MMI commands also provide a means to configure the control and RF cabinets for various system tests.
Status Indicators
The following status indicators support the EBTS operations and maintenance functions:
- EBTS Indicators - the Front Panel Display Card provides a means for monitoring operation of the EBTS equipment.
- iSC - the front panel display card provides a means of monitoring operation of the Controller. These LEDs provide information on the status of the iSC Controller, the network, the Subrated T1/E1 PCI card, and the Site Reference ISA card.
- BR - the BRC monitors the functions of all the modules in the BR. The LEDs on the front panel indicate the status of the modules monitored by the BRC. A solid red LED indicates a hard failure; a flashing red LED indicates a minor failure.
- iMU/EAS - three LEDs monitor the status of the iMU. Two LEDs, Input Active and Output Active, indicate the status of the EAS input alarms and relay outputs, respectively; one LED indicates that the iMU is powered on.
Event Logs
As with other iDEN Network Elements, the OS maintains a system log file of events, including changes of state and failures of hardware Customer Replaceable Units (CRUs). Status messages are reported to the OMC where they are received and stored for historical reference. These status messages may be used to aid in system troubleshooting and diagnostics. To use the system event log for diagnosing problems, review log entries that were recorded over an interval of time, such as several hours or even longer. It is also possible to review the history of a resource or CRU using the information from system messages and hardware message logs.
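Reviewing log entries over an interval of time, as suggested above, amounts to filtering stored status messages by timestamp. The record layout below is hypothetical; actual OMC log formats are not specified here.

```python
# Simple sketch of reviewing the system event log over a time window.
# The log-record fields ("time", "event") are invented for illustration.

from datetime import datetime, timedelta

def entries_in_window(log, end, hours=4):
    """Return log entries recorded in the `hours` before `end`."""
    start = end - timedelta(hours=hours)
    return [e for e in log if start <= e["time"] <= end]

log = [
    {"time": datetime(2007, 10, 5, 8, 0), "event": "BR state change"},
    {"time": datetime(2007, 10, 5, 13, 0), "event": "CRU failure"},
]
recent = entries_in_window(log, end=datetime(2007, 10, 5, 14, 0), hours=4)
assert [e["event"] for e in recent] == ["CRU failure"]
```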
Load Shedding
The load shedding mechanism uses memory usage, CPU usage, and possibly queue sizes as triggers. Alarms indicate when the trigger points are reached. The load shedding algorithm uses three major priority categories:

- Maintenance messages (including messages related to Resource Allocation Group (RAG) management and fault management)
- Fault management
- Location tracking of the Subscriber Unit (SU)
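The load-shedding idea above can be sketched as dropping lower-priority message categories while an overload trigger is active. The priority ordering and cut-off used here are assumptions for illustration; the manual does not specify which category is shed first.

```python
# Hedged sketch of priority-based load shedding. The ordering below
# (maintenance kept longest) is an assumption, not from the manual.

PRIORITY = {
    "maintenance": 0,            # kept longest under load (assumed)
    "fault_management": 1,
    "su_location_tracking": 2,   # shed first (assumed)
}

def shed(messages, overloaded, keep_below=2):
    """While overloaded, keep only messages with priority < keep_below."""
    if not overloaded:
        return list(messages)
    return [m for m in messages if PRIORITY[m["category"]] < keep_below]

msgs = [
    {"category": "maintenance"},
    {"category": "su_location_tracking"},
]
kept = shed(msgs, overloaded=True)
assert [m["category"] for m in kept] == ["maintenance"]
```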
Chapter 7
Interconnect Subsystem Details
In This Chapter Topic See Page
Purpose. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4 Interconnect Subsystem Description . . . . . . . . . . . . . . . . . . 7-5 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-5 Interconnect Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-5
Next Generation Dispatch (NGD) Configurations........................... 7-5 NGD Features ................................................................................ 7-6 Mixed Mode configuration .............................................................. 7-7 Split Mode ...................................................................................... 7-8 All Frame Relay................................................................. 7-8
MLC Hardware Description . . . . . . . . . . . . . . . . . . . . . . 7-31 MLC Software Description . . . . . . . . . . . . . . . . . . . . . . . 7-31 MLC Network Management . . . . . . . . . . . . . . . . . . . . . . 7-32 Interworking Function (IWF) . . . . . . . . . . . . . . . . . . . . . . . . . 7-33 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-33
References ................................................................................... 7-33
IWF Hardware Description . . . . . . . . . . . . . . . . . . . . . . . 7-34 IWF Software Description . . . . . . . . . . . . . . . . . . . . . . . 7-36 IWF Network Management . . . . . . . . . . . . . . . . . . . . . . . 7-36 SuperNode Data Manager/Fault Tolerant (SDM/FT) . . . . . . 7-37 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-37
References ................................................................................... 7-37
Purpose
This chapter describes the major network elements of the Interconnect subsystem in terms of their key hardware components, software functions, and network management facilities. Each key component is explained, including its key functions, hardware and software descriptions, and network management.
Interconnect Overview
Interconnect calling, similar to cellular telephone calling, allows a Subscriber Unit (SU) to travel freely throughout an iDEN system and communicate with other subscribers and landline systems. Within this extended network, a user can originate or receive interconnect calls. The system tracks the location of the SU, the services available to the SU, and the routing of calls to/from the SU as it travels throughout the system. With networked regions, the SU can roam into interconnected regions, originating and receiving calls as if it were in its home system. The major Network Elements involved in an Interconnect call are:
- SU (Subscriber Unit) - sends and receives voice data.
- EBTS (Enhanced Base Transceiver System) - converts the radio link to the land link and discriminates between interconnect and dispatch calls.
- BSC/iBSC (Base Site Controller / iDEN Base Site Controller) - routes interconnect packets between the EBTS and the MSC.
- MSC (Mobile Switching Center) - determines Interconnect services and location information and also controls and routes the calls to other providers.
- IWF (Inter-working Function) - provides circuit switched data services.
- SMS (Short Message Service) - provides text message services.
- VMS (Voice Mail Server) - serves as an answering machine and stores messages for the SU. It also indicates when voice mail is present.
- SDM/FT (SuperNode Data Manager/Fault Tolerant) - part of the Call Intercept System (CIS) that allows law enforcement agencies to obtain call data records as well as intercept audio in an Interconnect phone call for court-authorized monitoring.
The Next Generation Dispatch provides higher capacity and greater system reliability in a significantly smaller footprint. It supports All Frame Relay (AFR), Split Backhaul, or a combination of both configurations. (See Figure 7-1 NGD System on page 7-6.)
Figure 7-1 NGD System
NGD Features
The main changes to the iDEN network with the NGD system are:
- iVPUd - iVPU hardware operating with only Dispatch and Packet Data functions. (There is no Interconnect transcoding.) The iVPUd supports Split Mode Backhaul EBTSs only.
- iVPUdi - software upgrade that supports Dispatch and Packet Data functions in addition to Interconnect transcoding. Supports Split Backhaul and AFR Backhaul EBTSs.
- iVPUi - Interconnect transcoding only. Software upgrades of fielded iVPUs default to this mode. Supports Split Backhaul EBTSs only.
One iVPU software load supports the following:

- Interconnect Transcoding
- Dispatch Intra-Urban Dispatch Audio Routing and Duplication (APD)
- Dispatch Inter-Urban Dispatch Audio Routing and Duplication (iDAC)
- Frame Relay and IP Inter-working
The NGD architecture supports various configurations, including:

- Mixed Mode
- Split Mode
- All Frame Relay (AFR)
Between the Legacy and Next Generation Dispatch architectures, the Mixed Mode network is considered a transitional phase. Mixed Mode is a system configuration characterized by a mixture of Legacy network elements (e.g., BSCs, DAPs, APDs, iDACs) and new NGD network elements (e.g., HA-DAPs, iVPUs). (See Figure 7-2 below.) In the NGD configuration, an iVPU can be configured as:
- iVPUi (for Interconnect only)
- iVPUd (for Dispatch only)
- iVPUdi (for Interconnect or Dispatch)
The choice depends upon the system configuration. In the Mixed Mode phase of the iDEN architecture, the network elements required for Next Generation Dispatch are migrated into the Legacy architecture; these network elements can coexist with the legacy network elements. The difference between the iVPUi and the iVPUdi is that the iVPUi supports interconnect only for split backhaul sites, whereas the iVPUdi supports interconnect for both split backhaul and AFR backhaul sites, along with dispatch and packet data services.

Figure 7-2 Mixed Mode Configuration
Split Mode
The NGD network also supports the Split Mode site configuration, which is characterized by Mobis, SNMP, and interconnect voice traffic being carried on circuit-switched channels while dispatch traffic is carried on a Frame Relay channel. The Mobis and SNMP channels must terminate on an iCP or legacy BSC. Interconnect voice traffic can terminate on a legacy XCDR, an iVPUi, or an iVPUdi. The Frame Relay traffic can terminate on a legacy DAP, an MDG, an iVPUd, or an iVPUdi. (See Figure 7-3 below.)

Figure 7-3 Split Mode Configuration
All Frame Relay

AFR is supported in both Mixed Mode and the final NGD configurations. AFR trunks carry signaling, interconnect, dispatch, and packet data traffic over Frame Relay. (See Figure 7-1 NGD System on page 7-6.)
References
The following is a list of technical documentation and training courses that support the Legacy BSC:
- Interconnect Fundamentals Course (FND105)
- Interconnect Operations & Maintenance Course (OPM207)
- iBSC Operations & Maintenance Course (OPM214)
- iDEN System Recovery Topics Course (ADV401)
- Interconnect Troubleshooting Guide (6881012Y80)
- iBSC Hardware Manual (68P1001Y77)
- iBSC Health Check (HC04003)
- Legacy BSC to iBSC Reference Guide (WP 04008)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
iBSC Overview
The iBSC is a network element in the Interconnect portion of the Mobile Switching Office (MSO) of the iDEN system. The iBSC presents technology that increases the Erlang capacity per square foot, increases availability, and addresses cost concerns for maintaining legacy platforms due to end-of-life issues. See Figure 7-4 iBSC in the iDEN Network on page 7-10.
Figure 7-4 iBSC in the iDEN Network
The iBSC consists of an iCP (iDEN Call Processor) and two iVPUis (iDEN Voice Processing Units). Note that the iVPUi is the same element, with the same capabilities, as the legacy iVPU; only the name has been changed to identify it as an Interconnect element. (See Figure 7-5 iBSC Architecture on page 7-11.) The iCP controls the signaling necessary to set up, maintain, and tear down calls, and provides handover support and control in Mobility Management. The iVPUi controls the voice traffic. This includes compressing voice traffic sent to the mobile and decompressing voice traffic sent from the mobile. It transcodes from VSELP/AMBE to PCM audio, provides subrate allocation, and so on. With the rollout of SR14.0 and later, the iVPU becomes more important to the overall iDEN network. It is a configurable network element: it continues to support Interconnect and can now support Dispatch and Packet Data.
Figure 7-5 iBSC Architecture
With SR15.0 and later, the iBSC supports a maximum of 384 split-mode sites. The maximum number of All Frame Relay (AFR) sites supported has not increased; as in earlier releases, up to 176 AFR sites are supported. (See Figure 7-6 below.)
Figure 7-6 iBSC Coverage
iCP Function
The iCP performs the same functions as in legacy networks, including:
Controls the signaling necessary to set up, maintain, and tear down interconnect calls
Provides handover support and control in Mobility Management
Converts between the radio link format and the land network format
Controls Transcoder audio
iVPU Function
The iVPUi also performs the same functions as in legacy networks, converting the VSELP or AMBE++ voice packets used on the radio link to the Pulse Code Modulation (PCM) used by local and interconnected Public Switched Telephone Networks (PSTN). The iVPU also performs:
Subrate allocation
EBTS span audits
Audio path setup
Vocoder bypass for mobile-to-mobile calls of the same type, TTY Baudot, and Circuit Data Calls
iCP - refers to two Unix Nodes and one I/O cage in an Active-Active configuration (with one of the Nodes as Supervisor)
iCP Node - refers to one of the two Unix Nodes that comprise the iCP
iVPUi - refers to a single iVPUi shelf with a maximum of 15 BPP2 cards (BPP2 refers to the transcoder card)
The iBSC may co-exist on the same MSC and OMC with legacy BSC equipment; however, the iCP cannot be paired with a legacy EXCDR, nor can an iVPUi be paired with a legacy CP.
iBSC Hardware
Figure 7-7 iCP Components
One I/O shelf provides for the termination of the T1/E1 spans carrying the SS7, MOBIS, and SNMP
Two Server Nodes manage the call signaling and voice paths between the EBTS and the iVPUi or iVPUdi (load balanced)
See Figure 7-8 iCP Frame Layout on page 7-14 for details.
Figure 7-8 iCP Frame Layout
iCP Input/Output Shelf
The iCP I/O shelf (see Figure 7-9 iCP I/O Shelf on page 7-15) provides termination of the T1/E1 spans hosting the MTL, Mobis RSL, and SNMP RSL links. The internally packet-switched shelf consists of the following:
One management Blade and two redundant switch Blades
Between 2 and 6 I/O Blades, each capable of terminating up to 8 T1 or E1 spans; 2 Blades for SS7 and 0 to 4 for site traffic (LAPD)
Dual -48 VDC power feeds
Hot-swappable cards throughout the shelf
T1 termination solution for the dual-node iCP
Switch cards - redundant Ethernet connectivity to the iCP nodes
MGMT card - Intelligent Platform Management Interface (IPMI) support
T1 cards - SS7 and LAPD protocol support
Figure 7-9 iCP I/O Shelf
iCP Boards
The iCP contains the following boards (see Table 7-1 iCP Board Functions on page 7-16):
Alarm Management Controller (AMC) - Hot-swappable boards that provide for N+1 redundancy and are used to gather, control, and monitor system status over the IPMI bus.
IP Switch card - A complete 24-port 10/100 plus 2-port Gigabit Ethernet switching subsystem that performs all switching functions with a throughput of up to 6.6 million packets per second.
I/O Blade (also known as Payload card) - Processes either LAPD (Link Access Protocol on the D-Channel) or SS7 traffic, which is terminated at the Rear Transition Module.
I/O Blade Cards
With SR 15.0, the number of RSL cards may range from 0 to 4. Each RSL card may be installed in any slot between 3 and 9. (See Figure 7-10 below.) Also, as part of SR 15.0, the RSL board and slot ID must be configured from the OMC; local movement of the RSL cards via the LMT is no longer supported.
Note: If all EBTS sites are connected through AFR trunks, no RSL cards are required.
The MTL2 cards remain installed in slots 1 and 2. Neither the number nor the configuration of MTL2 cards has changed.
Table 7-2 indicates the number of RSL cards required to support a range of split mode sites.
Table 7-2 Split Mode RSL Card Configuration

# of Split Mode Sites    # of Required RSL Cards
0 (all AFR sites)        0 cards
1 - 96                   1 card
97 - 192                 2 cards
193 - 288                3 cards
289 - 384                4 cards
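The sizing rule in Table 7-2 can be sketched as a small helper. This is an illustration only; the function name and interface are not part of the product, just a restatement of the table (each RSL card covers up to 96 split-mode sites, with a 384-site ceiling).

```python
import math

def required_rsl_cards(split_mode_sites: int) -> int:
    """Return the number of RSL cards needed per Table 7-2.

    Each RSL card serves up to 96 split-mode sites; the iBSC
    supports at most 384 such sites (4 cards).
    """
    if not 0 <= split_mode_sites <= 384:
        raise ValueError("iBSC supports 0..384 split-mode sites")
    # 0 sites (an all-AFR configuration) needs no RSL cards.
    return math.ceil(split_mode_sites / 96)

# Examples matching Table 7-2:
# required_rsl_cards(0)   -> 0
# required_rsl_cards(96)  -> 1
# required_rsl_cards(97)  -> 2
# required_rsl_cards(384) -> 4
```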
iVPU Components
The iVPU frame houses two iVPU cages, each independent of the other. Each cage is an integral product that houses a common platform and iVPU-specific circuit modules. A midplane provides the communication between all modules. The iVPU frame has the following major equipment areas:
Top of cabinet I/O panel
DC power distribution
iVPU cages (lower and upper), front and rear
An iVPU frame equipped with two iVPU cages is populated with cards depending upon the configuration (1 iCP connected to 2 iVPUs) and the voice processing capacity required. (See Figure 7-11 iVPU Components on page 7-18.) iVPU capacity can be expanded at the iVPU cage level, by adding more Bearer Payload Processor (BPP2) cards for more voice processor circuits, or at the iVPU pool level, by adding 2 iVPU cages along with an iCP.
iVPU Front Cage Boards The iVPU front cage contains the following cards:
Circuit Breaker Module (CBM)
Power Supply Module (PSM)
Customer and Cabinet Alarm (CCA) card
Interface Switch Board/System Processor card (ISB/SPROC)
Bearer Payload Processor2 (BPP2) card
Fan Module
Figure 7-12 provides the layout of the boards and modules in the front cage of the iVPU.
Figure 7-12 iVPU Front Cage
Figure 7-13 iVPU Rear Cage on page 7-20 provides the layout of the boards in the rear cage of the iVPU.
Figure 7-13 iVPU Rear Cage
Table 7-3 provides the iVPU board descriptions:

Table 7-3 iVPU Board Functions

Board Name    Function
CBM           Contains seven 20 A Circuit Breakers that provide current overload protection.
PSM           A DC-DC converter.
CCA           Payload card that provides the alarm and status monitor and control interface between the ISB/SPROC and other iVPU plug-in cards/modules.
ISB/SPROC     Implements the Hub connecting the SPROC, network interface, and all payload cards, and supports O&M and centralized call processing activities in the iVPU cage.
DGBE
DOC3
IP Footprint Reduction
The IP Footprint Reduction (IPFR) feature in SR 15.0 significantly reduces the number of IP network addresses required by the iVPU. It applies to all versions of the iVPU, including the iVPUi, iVPUdi, and iVPUd. An iVPU with IPFR enabled uses the following IP addresses:
Each ISB is assigned an external IP address
The external logical SPROC is assigned an IP address
The links for the legacy BSC and iBSC are virtually the same. These links are Mobis for call set-up and tear-down to and from the radios, SS7 for call set-up and tear-down to and from the switch, and SNMP for operational management to and from the EBTS sites and the OMC. See Figure 7-14 iBSC Links and Interfaces on page 7-22 for more detail.
In addition, coded voice to and from the radios and PCM to and from the switch are sent over wire from the radios and over fiber to the MSC. Links between the call processors and voice processors support control and maintenance functions that are carried over wire or delivered in packets on a LAN.
Figure 7-14 iBSC Links and Interfaces
The iVPU Rolling Upgrade (RUP) feature enables the user to upgrade an iVPU to a new software release with zero downtime. This means that the platform will remain in service and continue to process calls. However, individual cards will be taken out of service, which will result in a loss of capacity.
A key prerequisite to a successful rolling software upgrade is compatibility between the running software and the new software. The OMC-R should not allow initiation of a rolling software upgrade unless the two releases are compatible. RUP was introduced in SR13.4 for an iVPU configuration and the same guidelines apply for the iVPUi mode:
As FRUs are software upgraded, the iVPUi continues to provide interconnect service; however, calls on the BPP2 being upgraded are interrupted. It is recommended that the upgrade be performed during the maintenance window to reduce end-user interruptions.
2N redundant boards are upgraded one at a time. These include the following:
Upon transition of the ISB/SPROC active state, a momentary loss of IP connectivity may affect interconnect calls that are in transition, such as call setups and handovers.
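The one-board-at-a-time pattern used for 2N redundant boards can be sketched as follows. This is illustrative pseudologic only, not the actual RUP procedure (which is coordinated by the platform and the OMC-R); the function and board names are hypothetical.

```python
def rolling_upgrade_2n(pair, install):
    """Upgrade a 2N-redundant board pair one board at a time.

    `pair` is a dict with 'active' and 'standby' board IDs;
    `install` is a callable that loads new software onto one board.
    Service stays up because one board of the pair is always running,
    though the active/standby switchover may briefly affect calls
    in transition (setups, handovers).
    """
    install(pair["standby"])          # upgrade the idle board first
    pair["active"], pair["standby"] = pair["standby"], pair["active"]
    install(pair["standby"])          # upgrade the formerly active board
    return pair

upgraded = []
pair = rolling_upgrade_2n({"active": "ISB-A", "standby": "ISB-B"},
                          upgraded.append)
# Both boards are upgraded exactly once; ISB-B ends up active.
```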
Supports iCP pair related configurations
The iCP and iVPU are included in the OMC System Status Display
The iCP does not take software downloads from the OMC
Software upgrades are performed via a CD-ROM drive resident on each iCP node
Application-level patches may be sent to the iCP via FTP
iCP Configurations
Download configuration data
Store and upload statistics
Perform time synchronization with the OMC-R
Report alarms and state change events to the OMC-R
Required iCP configuration changes should be applied to both nodes at the same time. Configuration changes that are not on-line changeable should be performed one node at a time to avoid an entire iCP outage.
Each iCP Node is managed independently and includes the following support:
Treated as a separate NE
Managed by the same OMC-R
Supporting all applicable configuration, fault, performance, time, and LMT functionality
iCP Firmware Downloads
Effective with SR 15.0, the LMT operator may upgrade the iCP firmware using two new commands: io_upgrade and io_versions. The io_upgrade command performs a firmware upgrade for a specific iCP card or fan; you must specify the iCP card slot or fan number. The io_versions command displays the shelf and card software versions and status.
OMC-R
The Operation and Maintenance Center - Radio (OMC-R) performs the following functions on the iBSC (and other iDEN network elements):
Configuration
Upgrades
Optimization
Monitoring
Reporting of alarms and state change events
The Online Configuration Change (OLCC) feature allows network operators to change the iDEN system configuration while the network elements remain online processing calls, reducing the overall scheduled configuration-change outage time for the iDEN network. It also provides the mechanism to change allocated system parameters without re-initializing the network element off-line.
OLCC Display and Logging
The OLCC changes being applied are displayed at the LMT when the OLCC switch occurs. The message describes the OLCC add/modify/delete action and whether it was successful. In addition, the OLCC changes are placed in the iVPU customer log.
PDR Warning
The Packet Data Region (PDR) Warning is a new message that indicates a potential service problem. If an operator connects a new EBTS site to an iBSC, and the iBSC and the site's packet data region are controlled by different OMCs, packet data service will not work: where a new site is associated with an iBSC and a DAP that are mapped to different OMCs, interconnect service is supported but packet data service is not. The PDR warning message helps prevent this configuration.
Admin LMT users may create and maintain user accounts on each iVPU. In addition, iVPUs can propagate their user accounts to other iVPUs. One iVPU is selected as the LMT user account master for the urban. The account master iVPU can synchronize the user account files for all iVPUs in the urban. In addition to its user accounts file, the LMT account master knows the IP addresses for all iVPUs in the urban.
Virtual Router Redundancy Protocol (VRRP) supports high availability for IP hosts in a LAN. Similar to Hot Standby Router Protocol (HSRP), VRRP creates a simple failover capability for a VRRP group. iVPUs support Open Shortest Path First (OSPF) and VRRP for high availability. Either technique may be configured in each iVPU.
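VRRP's failover idea can be illustrated conceptually: routers in a group advertise priorities, and the highest-priority live router answers for a shared virtual IP, so hosts never need reconfiguring. This is a generic sketch of the protocol's election rule, not iVPU configuration; all names are hypothetical.

```python
def vrrp_master(routers):
    """Pick the VRRP master: the highest-priority router that is up.

    `routers` maps router name -> (priority, is_up). Hosts keep the
    virtual IP as their gateway; only the elected master answers for it,
    so a master failure is healed by re-election, not by host changes.
    """
    live = {name: prio for name, (prio, up) in routers.items() if up}
    if not live:
        return None
    return max(live, key=live.get)

group = {"rtr1": (200, True), "rtr2": (100, True)}
assert vrrp_master(group) == "rtr1"
group["rtr1"] = (200, False)          # master fails...
assert vrrp_master(group) == "rtr2"   # ...backup takes over the virtual IP
```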
References
The following is a list of technical documentation and training courses that support the MSC:
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
MSC Functionality
The MSC is a GSM-based mobile telephone switch that provides Interconnect services by serving as an interface between the mobile network and the PSTNs of other service providers. Each MSC provides service within a geographic coverage area, and a single iDEN urban may contain more than one MSC. The MSC is a Nortel Digital Multiplex System-100 Family product. It can function as a landline switch, a wireless switch, a gateway switch, an SS7 signaling controller, and a database controller; the switch is generic in nature, and the software packages loaded on it determine its function. The MSC has basic functionality as well as specific functions in the iDEN system (see Figure 7-15 MSC in the Interconnect Subsystem on page 7-27). These include:
Speech - the information sent by the customer from one location to another. The information is only passed through a peripheral module and the network.
Message - the internal information sent by the entire DMS (Digital Multiplex System) switch for call setup. It is often referred to as message links.
Signaling - the information sent between Central Offices and the BSC/iBSC and EBTS for digit transmission and call details for billing.
Control and interface to the PSTN
Call processing for Interconnect calls
Echo cancellation for voice calls (with associated equipment)
Provisioning of subscriber supplementary services
Authentication of subscriber units
Intra-system roaming and/or handover between BSCs/iBSCs
Inter-system roaming and/or handover between MSCs
Billing record collection
Interface to a customer-supplied billing system
Control of IWF for data networking
Interface to the Voice Mail System
Switch Matrix
The switch matrix is a set of digital cross connections between input and output lines that represents the path to be followed for a particular call setup. The design of the matrix uses E1-style connections. Up to 60,000 lines may be attached to the switch matrix.
Core Processor
The Core Processor is a CPU that interprets signaling from the external PSTN and the iDEN network equipment to identify, authenticate, service, and route call signaling and traffic in the MSC. It also communicates with other intelligent nodes in the iDEN network to coordinate the network-wide setup of communications circuit paths between mobile users and land telephone users. Digits dialed by land users or iDEN mobile users are ultimately used by the core module to make the desired connection.
Facilities Interface
Both iDEN system span lines and the PSTN transport facilities (T1 and/or E1) are connected to the MSC through Digital Trunk Controllers (DTCs). The DTC is the interface between the external T1/E1 trunks, the Core Processor, and the Switch Matrix; T1 or E1 telephony connections may be hardwired to the DTC. The signaling and data span lines of the T1/E1s are split off and connected to the switch matrix for telephony data and to the LPP for signal processing. The DTC cards may be one of two types: T1 for North American markets and E1 for international markets. Specialty markets are also supported with other tailored DTC cards. The exact number will depend on the market and system configuration as specified by planning and capacity. The Spectrum Peripheral Module (SPM) is an upgraded external interface used to terminate OC-3 fiber optic trunks. DTCs and SPMs can co-exist.
Signal Interface
The Link Peripheral Processor (LPP) is the signaling interface of the MSC. All interconnect signaling from the PSTN and the iDEN system passes through the LPP.
Home Location Register
The Home Location Register (HLR) is where every Subscriber Unit's permanent subscriber records are stored. The HLR may be implemented as part of the MSC or as a separate computing system. All SU identities and the various supplementary services are provisioned in the HLR. The HLR performs Subscriber Access Control: it is queried each time an interconnect call is initiated or interconnect call features are requested and the VLR does not contain an entry. The MSC manages access to the system by verifying requests for service against a database of subscriber privileges. The HLR database also contains Mobile Station Identification data and Fixed Network Data.
Visitor Location Register
The Visitor Location Register (VLR) is the location and activity database of the MSC, containing the IDs and the most recent location information for each iDEN Subscriber Unit (SU). The core processor reads this location register database to check SU authorization and the location information it requires for call setup.
The VLR is a fast-access database storing data about the SU units that are now or have been recently active. It is a fast look up for Interconnect calls, permissions, and services. The VLR speeds call setup because the entire HLR subscriber database that is located on disk is not searched.
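The VLR-before-HLR lookup described above is essentially a fast cache in front of the authoritative database. The sketch below illustrates that pattern only; the class and field names are hypothetical, not product APIs.

```python
class LocationRegisters:
    """Illustrative VLR (fast cache) in front of the HLR (full database)."""

    def __init__(self, hlr_records):
        self.hlr = hlr_records   # authoritative; disk-resident in practice
        self.vlr = {}            # fast-access entries for recently active SUs
        self.hlr_queries = 0

    def lookup(self, su_id):
        # Fast path: a recently active SU already has a VLR entry,
        # so the full HLR database is not searched.
        if su_id in self.vlr:
            return self.vlr[su_id]
        # Miss: query the HLR and cache the record in the VLR.
        self.hlr_queries += 1
        record = self.hlr[su_id]
        self.vlr[su_id] = record
        return record

regs = LocationRegisters({"su1": {"services": ["interconnect"]}})
regs.lookup("su1")   # first call setup queries the HLR
regs.lookup("su1")   # subsequent setups are served from the VLR
assert regs.hlr_queries == 1
```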
References
Technical documentation that supports the MLC is available from the vendor.
The MLC:
consists of software and/or hardware offering location service capabilities
accepts, services, and responds to location service requests
consists of location service components which are distributed to one or more Public Land Mobile Networks (PLMN) and/or service providers
Gateway Mobile Location Center (GMLC)
Serving Mobile Location Center (SMLC)
These network elements can be deployed separately or in combination. The MLC uses the Nortel Networks Universal Signaling Point (USP) to communicate with the SS7 network. The MLC processing units are SUN Netra units, using the Solaris operating system on a SPARC platform.
CORBA (Common Object Request Broker Architecture) - Input to the MLC database can use a CORBA interface that includes methods for reading and writing subscriber data. CORBA is a machine-to-machine interface with a well-defined protocol.
SNMP (Simple Network Management Protocol) - SNMP is an industry-standard interface for fault management and performance monitoring.
XML (Extensible Markup Language) - XML is the universal format for structured documents and data on the internet. It is a set of guidelines for designing text formats for producing files that are easy to generate and read. It is similar to HTML.
Sun Microsystems Solaris - The Solaris Operating Environment is acknowledged by the industry to be the premier UNIX environment for SPARC and Intel Architecture systems.
SDK (Software Development Kit) - The SDK provides the information needed to develop location-based applications that can interface to the MLC. The purpose of the SDK is to provide developers with a tool kit that allows development and testing of applications without interfacing directly with the MLC.
References
The following is a list of technical documentation and training courses that support the IWF:
Interconnect Fundamentals Course (FND105) Interconnect Operations & Maintenance Course (OPM207) IWF Installation Manual (68P80800E10) IWF System Release Implementation Manual (68P80802B05)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
IWF Overview
The Inter-Working Function (IWF) performs the data-rate adaptation between the PSTN and the iDEN system. The IWF provides a modem bank that allows data transmitted from data devices on an external network to access SUs and RF modems. The IWF terminates PCM digital data formats such as Teletype (TTY), facsimile, and dial-up networking. Consisting of hardware and software elements, the IWF platform has the following functionalities (see Figure 7-17 Interworking Function Services on page 7-34):
Provides circuit switched data services
Serves as a translation and conversion point
Allows an end-to-end connection between an SU and a remote device such as a dial-up modem
Provides required rate adaptation between the SU and the PSTN or between two SUs
This allows subscribers to connect a laptop computer or fax machine directly to an iDEN SU for wireless modem and fax data capabilities.
Radio Protocol
Radio Link Protocol (RLP) is the method of transferring compressed character data between the Mobile Subscriber and the MSC-IWF during Circuit Switched data networking.
A 48/60-port IWF consists of two nodes per chassis; a 24/30-port IWF consists of one node per chassis.
See Figure 7-18 IWF Front View on page 7-35 for the front view and Figure 7-19 IWF Back View on page 7-35 for a view of the back of the IWF.
The signaling interface between the MSC and the Interworking Function (IWF) occurs through Ethernet Interface Units (EIU) in the Link Peripheral Processor (LPP) in the MSC. The IWF functions can be divided into two elements:
Mobile-side IWF Trunk (MIT) - carries communications from the SUs to the IWF; the mobile side supports the iDEN protocol connection to the SU
Network-side IWF Trunk (NIT) - carries communications from the network to the IWF; the network side supports the connection between a fixed modem and the IWF
MITs and NITs are configured to work as elements within the circuit switch function.
References
The following is a list of technical documentation and training courses that support the SDM/FT:
Interconnect Fundamentals Course (FND105) Interconnect Operations & Maintenance Course (OPM207) Understanding iDEN System Fault Tolerance - Interconnect subsystem (WP2002-021)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
SDM/FT Overview
The SDM/FT (most commonly referred to as SDM) manages data formatting and provides a point of access to the switch. This access allows Interconnect surveillance, customer provisioning, and billing. The SDM/FT provides a surveillance capability in the iDEN Interconnect subsystems. Surveillance of communications may involve delivery (to law enforcement) of a combination of:
Call identifying information Call content information (digital voice) Location information
SuperNode Data Manager (SDM) - provides the point of access for a law enforcement agency to log into the system and obtain interconnect voice data. Call Intercept Provisioning Center (CIPC) - provides a web interface to provision surveillance information. Once the information has been submitted, the CIPC informs the SDMs of the presence of the new warrant. Also as a part of the warrant provisioning, the CIPC provides the surveillance ID and password for the law enforcement agency to gain access to the surveillance.
Law Enforcement Monitor (LEM) - provides the ability to log into the SDM/FT with a surveillance ID and password, then collect and store the audio and data about the calls under surveillance. Once those are stored, the LEM must be able to decode the audio and data into some format usable by the agency in court.
Monitored Events
The call content of the intercepted call is delivered to one or more monitoring centers through PET7 (ANSI/ETSI-ISUP) trunks from the DMS-MSC. The MSC selects an idle trunk member from the observation group, and sets up a listen-only bridge connection to the Monitoring Center. The call data of the intercepted call originates at the MSC or HLR. The data is sent to the attached SDM, where the data records are formatted according to the legal requirements of the market area. The data is then sent to the Monitoring Center as Call Data Records (CDRs) over X.25 or Ethernet data lines.
OMC-R
OMC-R is responsible for configuring and monitoring the SDM/FT using MMI commands to control the specific functions.
MMI Commands
The Man-Machine Interface (MMI) provides a graphical user interface using the Airgen application software. The functions include the following:
Automatic validation of configuration parameters to catch errors that might cause an outage
The capability to perform network management functions remotely, such as remote configuration downloads, remote statistics uploads, and remote switchover to a standby unit for faulty network elements
Remote login to access other network elements and perform diagnostics and failure recovery procedures
The Call Intercept System (CIS) management is handled by the Call Intercept Provisioning Center (CIPC). The CIPC performs the following functions:
Chapter 8
Dispatch Subsystem Details
In This Chapter
Purpose
This chapter describes the Network Elements (NEs) that comprise the Dispatch subsystem in the Integrated Digital Enhanced Network (iDEN) network. Each NE is documented in a functional description. Each functional description includes information on the key hardware components, software function, and network management facilities. The following Dispatch NEs are supported in the Next Generation Dispatch (NGD) architecture:
High-Availability Dispatch Application Processor (HA-DAP)
High-Availability iDEN Home Location Register (HA-iHLR)
iDEN Surveillance Gateway (iSG)
iDEN Vocoder Processing Unit (iVPU)
iDEN Vocoder Processing Unit for Dispatch (iVPUd)
iDEN Vocoder Processing Unit for Dispatch and Interconnect (iVPUdi)
The following legacy Dispatch NEs are included in this chapter for reference until the NGD architecture has been fully deployed in each service provider's network.
Dispatch Application Processor (DAP)
iDEN Dispatch Access Controller (iDAC)
Advanced Packet Duplicator (APD)
Metro Packet Switch (MPS)
Split Mode
Split Mode consists of a site configuration with Dispatch traffic carried by Frame Relay (FR) channels and Interconnect voice traffic carried by circuit-switched channels. In Split Mode, Dispatch FR traffic terminates on a legacy DAP, Mobile Data Gateway (MDG), iVPUd, or iVPUdi, while Interconnect voice traffic terminates on either a legacy XCDR transcoder, an iDEN Vocoder Processing Unit for Dispatch (iVPUd), or an iDEN Vocoder Processing Unit for Dispatch and Interconnect (iVPUdi).
Mixed Mode
Mixed Mode architecture is comprised of two stages, Mixed Mode Stage One and Mixed Mode Stage Two. Mixed Mode Stage One (see Figure 8-1) combines the legacy NEs with the iVPUd. Since the HA-DAP has not yet been added to the network, the iVPUd or iVPUdi is routed through the MPS. If the service provider configuration still includes a legacy Base Site Controller (BSC), an iVPUd is installed: Interconnect traffic is sent to the BSC, and the Dispatch traffic passes through the iVPUd. In Mixed Mode Stage Two (see Figure 8-2 on page 8-8), the HA-DAP, the iVPUdi, and the iDEN Vocoder Processing Unit for Interconnect (iVPUi) are available in the NGD architecture. Before the HA-DAP is integrated into the network, all EBTSs must be associated with an iVPUd or iVPUdi.
Figure 8-1 Mixed Mode Stage One
Figure 8-2 Mixed Mode Stage Two
In the final NGD implementation (see Figure 8-3), all legacy DAPs are replaced by HA-DAPs, and all EBTSs communicate through the iVPUd or the iVPUdi. The MPS, APD, and iDAC are decommissioned from the iDEN network according to the service provider's deployment schedule. See Table 8-1 Mixed Mode and Final NGD Architecture NEs on page 8-9 for a comparison of which Dispatch NEs are supported in each phase of the NGD implementation.
Figure 8-3 Next Generation Dispatch Network
Table 8-1 compares which Dispatch NEs are supported in each phase of the NGD implementation.

Table 8-1 Mixed Mode and Final NGD Architecture NEs

NE            Mixed Mode Stage One   Mixed Mode Stage Two   Final NGD
MPS           Fully supported        Fully supported        Not supported
Legacy DAP    Fully supported        Fully supported        Not supported
HA-DAP        Fully supported        Fully supported        Fully supported
APD           Fully supported        Fully supported        Not supported
iDAC          Fully supported        Fully supported        Not supported
iSG           Fully supported        Fully supported        Fully supported
HA-iHLR       Fully supported        Fully supported        Fully supported
iVPUd         Fully supported        Fully supported        Fully supported
iVPUdi        Not supported          Fully supported        Fully supported
iVPUi         Not supported          Fully supported        Fully supported
References
The following publications may be required to supplement the information contained in this chapter.
EBTS Backhaul: Frame Relay DSO Requirements (WP2002-025) DAP User Guide (6881001Y49) SR13.4 DAP Hardware Installation Manual (68P81001Y50) Understanding iDEN System Fault Tolerance - Dispatch Subsystem (WP2002-022)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
DAP Overview
The DAP coordinates and controls Dispatch and Packet Data (PD) communication services. The DAP is optimized to support rapid response time for services such as group calls, private calls, call alerts, emergency calls and Packet Data networking. The DAP provides the following functionality in the iDEN network:
Overall coordination and control of Dispatch and Packet Data functions
Tracks and maintains Dispatch and Packet Data mobility
Provides the Visitor Location Register (VLR) for subscriber information
Provides first-time registration for Dispatch and Interconnect subscribers
Tracks Subscriber Unit (SU) deactivation
Sets up Dispatch group calls
Collects alarms and performance statistics
The R10K DAP supports a maximum of 2,000 simultaneous Dispatch voice calls and 150,000 Dispatch subscribers. The R12K 4-way DAP supports up to 330,000 Dispatch subscribers. The R12K 2-way DAP supports up to 2,000 simultaneous Dispatch voice calls and 180,000 Dispatch subscribers.
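The capacity figures above can be collected into a small lookup, for example to check which DAP platform covers a given subscriber load. This is an illustrative sketch only; the table restates the numbers from the text (the simultaneous-call figure for the R12K 4-way is not given there), and the function name is hypothetical.

```python
# Capacity figures stated above; None marks a limit the text does not give.
DAP_CAPACITY = {
    "R10K":       {"calls": 2000, "subscribers": 150_000},
    "R12K 4-way": {"calls": None, "subscribers": 330_000},
    "R12K 2-way": {"calls": 2000, "subscribers": 180_000},
}

def platforms_for(subscribers):
    """Return the DAP platforms whose subscriber limit covers the load."""
    return [name for name, cap in DAP_CAPACITY.items()
            if subscribers <= cap["subscribers"]]

assert platforms_for(160_000) == ["R12K 4-way", "R12K 2-way"]
assert platforms_for(200_000) == ["R12K 4-way"]
```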
Urban Identifier (Urban ID)
Fleet Identifier (Fleet ID)
Talk Group Identifier (Talk Group ID)
Multiple Simultaneous Talk Group ID (MST ID)
Fleet Member Identifier (Fleet Member ID)
Site specific parameters
Urban ID
The Urban ID parameter defines the home region for a particular SU. This parameter allows an SU to roam outside the home region and maintain a unique ID that is used to confirm service and permit inter-region and horizontal Dispatch (cross-fleet) calling. There can be approximately 11 million Urban IDs in the iDEN network. The Dispatch-Home Location Register (D-HLR) and Dispatch-Visitor Location Register (D-VLR) maintain the Urban ID information in their databases.
Fleet ID
The Fleet ID parameter is an identification number assigned to major corporate or municipal subscribers by the service provider. A fleet is comprised of different groups, users, or members, and those groups are defined and managed by the HA-iHLR and D-VLR. The Fleet ID parameter is the largest functional unit of a Dispatch call.
Fleet Member ID
The Fleet Member ID parameter is a number assigned by the service provider to uniquely identify an SU as a member of a particular fleet. This parameter is used to identify the originator or target SU during a Private Dispatch call. The Fleet Member ID is also used to show which SU is transmitting in the PTT-ID service. Fleet Member IDs need not be contiguous, so a fleet member ID can be removed from a fleet without renumbering the fleet.
Talk Group ID
The Talk Group ID parameter is a number given by the service provider to divide fleets into groups or logical units. The type of fleet, and the number of groups in a fleet, vary with the requirements of each fleet. Each group in a fleet has a dispatcher (user 0) that is the control point for all Dispatch communication. Members of the fleet are assigned to different groups based on task, function, organization, or another method. An SU may be part of more than one talk group, and can change its default talk group, as long as the change is allowed in provisioning by the service provider. The Dispatch service provider creates specific limitations for each member of the fleet so a user only hears the conversations and announcements related to members of that same group.
The Multiple Simultaneous Talk (MST) Group ID is an optional feature associated with advanced feature software. An AFU SU is required to support the MST optional feature. The MST feature allows an SU to belong to up to four groups in the same fleet, so a user can monitor and participate in other groups. The user operates a mode switch to change from the currently selected group to an associated group. The original group is called the selected group, and the other three potential groups are called associated groups. Only one selected group is allowed per mode-switch setting, and the capability to select different associated talk groups for all modes can be toggled on or off.
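The selected/associated group relationship above can be sketched as a small data structure. This is an illustrative model only, not Motorola code; the class and method names (`ModeSetting`, `audible_groups`) are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ModeSetting:
    """One mode-switch position: one selected group plus up to three associated groups."""
    selected: int
    associated: list = field(default_factory=list)
    monitor_associated: bool = True  # associated-group monitoring can be toggled per mode

    def __post_init__(self):
        # MST allows at most four groups total: 1 selected + 3 associated
        if len(self.associated) > 3:
            raise ValueError("MST allows at most three associated groups per mode")

    def audible_groups(self):
        """Talk groups the user hears in this mode-switch position."""
        return [self.selected] + (self.associated if self.monitor_associated else [])

# usage
mode = ModeSetting(selected=101, associated=[102, 103])
assert mode.audible_groups() == [101, 102, 103]
mode.monitor_associated = False
assert mode.audible_groups() == [101]
```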
In the iDEN network, each EBTS has unique site specific identifiers to coordinate activities associated with the Radio Link. Site specific parameters are transmitted on the Broadcast Control Channel (BCCH) by each cell to the mobile subscribers on that cell. The site specific parameters are read and utilized by each SU as link control procedures are run.
Hardware Description
The following platforms are available in the DAP subsystem:
The R10K DAP is housed in two cabinets that support 24 V.35 I/O ports to the MPS (see Figure 8-4 on page 8-14). The R10K DAP utilizes two Central Processing Units (CPUs), 1 Gigabyte (GB) of Random Access Memory (RAM), and two 4 GB hard drives. The R12K 4-way DAP is housed in one cabinet that supports 32 T1 or E1 connections to the MPS (see Figure 8-5 on page 8-15). The R12K 4-way DAP uses two CPUs, 2 GB of RAM, and two 18 GB hard drives. The R12K 2-way DAP is housed in one cabinet that supports 32 T1 or E1 connections to the MPS. The R12K 2-way DAP uses two 300 MHz CPUs, 1 GB of RAM, and two 4 GB hard drives. The DAP is comprised of the following components:
Mass Storage Devices (MSD)
Router Controller Card (ROC)
Standard System Controller (SSC)
CPU
The two CPUs provide the processing engine and memory for the DAP. The CPUs provide the logic circuits to execute Dispatch processing code and access the databases for service and location information.
IOC
The IOC cards connect the DAP to external equipment. In most cases, V.35 may be used to handle FR communication (Dispatch call and Packet Data processing) with the MPS, and operations and maintenance communications. Some installations may use another card to provide a single port to an Ethernet LAN transceiver for operations and maintenance.
MSD
The MSD stores and maintains the operating software and databases. Tape backup and CD-ROM devices are also available. All of these devices use the Small Computer System Interface (SCSI). The databases contain the identification, location, and authentication information for Dispatch enabled mobile subscribers.
ROC
The ROCs manage the routing of control signaling and operations and maintenance information between the DAP processor and the rest of the system. There are two cards for redundancy and fault tolerance.
SSC
There are two SSCs in the DAP cabinet for fault tolerant redundancy. The SSC cards provide the ServerNet communications routing for the following:
Configuration and maintenance
Host Bus Adapters (HBAs)
I/O controllers
I/O expansion functions for the system
Internal control and maintenance of the system
SCSI disk drives
Figure 8-4 R10K DAP Cabinet (CPU modules, I/O modules, blower modules, power modules, storage/expansion modules, and ROC/CPU/SSC/IOC slot assignments)
Figure 8-5 R12K DAP Cabinet (CPU modules, I/O modules, blower modules, power modules, and ROC/CPU slot assignments)
Software Description
This section contains the following software-related information:
DAP Feature
The iGW-SDGC Enhancement feature impacts the DAP.
iGW-SDGC Enhancement
The iGW now supports Selective Dynamic Group Calls (SDGC) between a 3G network and the iDEN network. When an SDGC call is initiated, the 3G network forwards a SIP invite request. The VSGW forwards the new Proxy Page Request Forward message to the DAP and reserves transcoder resources. When the iDEN SU responds to the page, the DAP sends an SDGC Page Response Forward message to the VSGW. The VSGW sends an OK response to the 3G network endpoint signaling that the Dispatch call is connected. For additional information about the feature, see iGW-SDGC Enhancement on page 8-26.
DAP Enhancements
The following performance and status enhancements were added for the DAP:
Business Change Control Board (BCCB) 862
BCCB 1008
BCCB 1020
DAP Migration for NGO
BCCB 862
Previously, failed mobility attempts were tracked by the ACG/DLA ID zero (0) statistics. Even when the DAP incremented these statistics, they were not forwarded to the OMC-R because the statistic ACG/DLA ID 0 was considered invalid. Four existing statistics were therefore redefined to count successful reconnect, renewal, and deactivation events, and three new statistics were added to count failed reconnect, renewal, and deactivation events. Failed registrations are already tracked in the existing DAP_REG_REQ_IMEI_FAIL and DAP_REG_REQ_IMSI_FAIL statistics.
See Table 8-2 for information on the new and revised statistics.

Table 8-2 Statistics Changes for BCCB 862

Existing Statistic | New Statistic     | Stat ID | Level | New Description
TOT_RRNC_REQ       | TOT_SUCC_RRNC_REQ | 1088    | ACG   | Successful reconnects (registration renewals with call)
TOT_RRN_REQ        | TOT_SUCC_RRN_REQ  | 1089    | DLA   | Successful registration renewals with no call
TOT_R_REQ          | TOT_SUCC_R_REQ    | 108A    | DLA   | Successful registration renewals handled by the serving DAP
TOT_DEA_REQ        | TOT_SUCC_DEA_REQ  | 108B    | DLA   | Successful deactivations
N/A                | …                 | …       | DLA   | Failed reconnects (registration renewals with call)
N/A                | …                 | …       | DAP   | Failed registration renewals with no call
N/A                | …                 | …       | DAP   | Failed deactivations
BCCB 1008
The BCCB 1008 enhancement increased the number of supported sites from 800 sites per Mobile Switching Office (MSO) to 2000. The HA-DAP supports 2000 sites.
BCCB 1020
The BCCB 1020 enhancement addresses a PCCH spiking problem caused by a high rate of Registration Renewals (RR). An RR is sent by the mobile to update the D-VLR when the mobile crosses a DLA boundary, or periodically once every 10 hours. See BCCB 1020 on page 8-29 for more information.
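The two renewal triggers described here (a DLA boundary crossing, or the 10-hour periodic timer) can be expressed as a small predicate. A minimal sketch; the function name and arguments are invented, and only the trigger conditions come from the text.

```python
RENEWAL_PERIOD_HOURS = 10  # periodic registration-renewal interval, per the text

def needs_registration_renewal(current_dla, last_registered_dla, hours_since_last_rr):
    """An SU sends a Registration Renewal (RR) when it crosses a DLA boundary
    or when the periodic 10-hour timer expires."""
    return (current_dla != last_registered_dla
            or hours_since_last_rr >= RENEWAL_PERIOD_HOURS)

assert needs_registration_renewal(5, 5, 2.0) is False   # same DLA, timer not expired
assert needs_registration_renewal(6, 5, 2.0) is True    # crossed a DLA boundary
assert needs_registration_renewal(5, 5, 10.0) is True   # periodic renewal due
```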
DAP Migration for NGO

With the introduction of NGO, Urbans with up to seven OMC-Rs are upgraded to Urbans with up to two Netra 440 servers. To maintain DAP link state information for sites on an OMC-R basis, a migration procedure is performed.
Account and performance management
Database management
System control management
Resource management
Configuration and state management
Availability management
Call processing management
Account and Performance Management

The Account Performance Management Task (APMT) retrieves data from shared memory and from other tasks. The APMT also creates billing records and sends statistics to the Common Agent (CA).
Database Management
The Database Management architecture consists of the Lifetime Registration Audit Task (LRAT), the MDG Backup Download Task (MBDT), the Visitor Location Register Task (VLRT), and the Surveillance Provisioning Management Task (SPMT). The LRAT performs database audits. The MBDT stores and updates Mobile Data Gateway (MDG) related subscriber information, and sends subscriber information to the MDGs. The VLRT manages the D-VLR database. The SPMT manages subscriber surveillance information, and handles surveillance requests from iSGs.
System Control Management

The System Control Management architecture consists of three interfaces: the System Administration Management Task (SAMT), the System Maintenance Terminal Task (SMTT), and the Control Database Trace Task (CDTT). The SAMT is the interface to request system startup/shutdown, or system and database maintenance. The SMTT is the interface to retrieve or update system configuration and state information. The CDTT is the interface to turn traces on or off and retrieve various database tables.
Resource Management
The Resource Management architecture consists of the MDG Resource Management Task (MRMT), the Packet Duplicator Management Task (P2MT), the iSG Resource Management Task (IRMT), and the DAP Resource Management Task (DRMT). The MRMT manages MDG status information. The P2MT manages APD and iDAC status information. The IRMT manages iSG status information. The DRMT manages the status information of other DAPs.
Configuration and State Management

The Configuration and State Management architecture consists of four interfaces: the Configuration Management Task (CFMT), the GTT Data Manager Task (GDMT), the DAP Link Management Task (DLMT), and the Node State Management Task (NSMT). The CFMT manages the system configurations for other tasks. The GDMT receives and manages the Global Title Translation (GTT) data. The DLMT handles link state management for the DAP. The NSMT handles application, node, and load shedding state management for the DAP.
Availability Management
The Availability Management architecture consists of the DAP Utilization Resource Task (DURT). The DURT determines the availability of the DAP and sends the availability information to the ACGs.
Call Processing Management

The Call Processing Management architecture consists of the Call Processing Management Task (CPMT), the Mobility Management Task (MBMT), and the Registration Renewal Required Task (RRRT). The CPMT sets up Dispatch calls, and processes Dispatch and Packet Data calls. The MBMT tracks the location of the mobile subscriber, handles Dispatch and Packet Data registrations and SU authentication, and applies provisioning changes sent from the HA-iHLR. The RRRT sends registration renewal required messages to all or a subset of the Access Control Gateways (ACGs).
Network Management
DAP operation and maintenance activities, and network interfaces are provided by the following:
iDEN and SMT Menus
Event logs
CA
Common Controller Platform (CCP) Layer
Mobile Application Part (MAP)
iDEN Menu
The iDEN Menu provides the interface to operate and maintain the DAP. The menu interface provides the following functionality:
System and configuration file maintenance
Statistics and queries for D-VLR
Alarm and billing file maintenance
Hardware status and diagnostics
SMT Menu
The SMT Menu provides the user interface for the following functionality:
View and update alarm table parameters
Configure service parameters, I/O parameters, performance parameters, trace parameters, and PD parameters. These parameters can be configured, but the updates are only temporary until the configuration file is updated. If the application is stop-started, the parameters default to the original values.
Update billing
View state management information
Update MDG Packet Data information
Event Logs
The OS maintains a system log file of events, including changes of state and failures of hardware CRUs. A single event entry in the event log can be misleading; however, a series of similar or related events recorded over time often identify the source of the problem. To use the system event log for diagnosing problems, review log entries that were recorded over an interval of time, such as several hours or even longer. It is also possible to review the history of a resource or CRU using the information from system messages and hardware message logs.
CA
The CA provides a set of common network management facilities for the NEs. The CA is the interface between the OMC-R and the various NEs (except for legacy BSC and EBTS), and is OS independent. Each NE can selectively use the CA services based on their needs. The CA implements the following network management facilities:
Bootstrap Protocol (BOOTP) client
Simple Network Time Protocol (SNTP) client
File transfer protocol, statistics collection
Configuration management
Alarm/event reporting
OMC-R simulator
CCP Layer
The CCP Layer is the interface between the box hardware or OS, and the application. The CCP implements a common functionality for multiple applications, and allows the application to be platform-independent.
MAP
The MAP communicates between VLRs and HLRs, and provides services such as authentication, location management, and provisioning. The MAP application is implemented in the Mobile Application Part for VLR Task (MAVT) on the DAP.
References
The following publications may be required to supplement the information contained in this chapter.
HA Dispatch Application Processor (HA-DAP) User Manual (68P81001Y83)
HA-DAP Hardware Installation Manual (68P81001Y84)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
HA-DAP Overview
The HA-DAP is the next generation DAP for the iDEN network, and is responsible for the overall coordination and control of Dispatch and Packet Data services. The HA-DAP has two nodes that work in an active/standby configuration to provide highly available Dispatch services. With the HA-DAP, operating costs associated with Dispatch services are reduced, and the availability of Dispatch services is increased. The HA-DAP is optimized to support rapid response time for all Dispatch call types and Packet Data networking. The HA-DAP provides the following functionality in the iDEN network:
Assigns signaling and routing paths for Dispatch and Packet Data.
Collects and maintains alarms and performance statistics.
Supports first-time registration for all Dispatch and Interconnect subscribers.
Tracks Dispatch subscriber mobility through the last known Dispatch Location Area (DLA).
Controls the activity of a subscriber unit (SU) through the D-VLR.
Provides an LMT interface for subscriber provisioning and services.
A single HA-DAP supports a maximum of 20,000 simultaneous Dispatch calls with up to 5,000 of those calls designated as Inter-Urban Dispatch calls. The HA-DAP supports the following Dispatch functionality:
8,000,000 subscribers in an Urban
1,031,851 multi-service subscribers (Home and Roaming)
275,000 fleets
1,031,851 talk groups
1,031,851 modes
150,000 Selective Dynamic Group (SDG) records
Hardware Description
The HA-DAP is comprised of the following components (see Figure 8-6):

Figure 8-6 HA-DAP Cabinet - Front and Rear View (four RX7620 servers)
RX7620 Server
The RX7620 server includes up to 16 IA64 processors and a maximum of two cell boards. See Figure 8-7 and Figure 8-8 on page 8-26 for more information. The cell board contains up to four Central Processor Units (CPUs), the PCI-X Input/Output (I/O) backplane, the Cache Coherency Controller, internal disk drives, the memory controllers, and the memory Dual In-line Memory Modules (DIMMs). The PCI-X backplane is the primary I/O interface for the RX7620; it is a single board that acts as two independent partitions, and provides 16 64-bit hot-plug slots. The PCI card section includes redundant Small Computer System Interface (SCSI) controller cards, and redundant dual port Ethernet cards. The Platform Dependent Hardware (PDH) Riser board is a daughter card for the cell board. The PDH obtains cell board configuration information from cell board signals and from the cell's LPM. It contains the memory interface microcircuit, processor-dependent hardware, flash memory, and a manageability microcontroller, called the Platform Dependent Hardware Controller (PDHC).
Figure 8-7 RX7620 Server - Front Panel (DVD+RW drive, DC power supply)

Figure 8-8 RX7620 Server - Back Panel (PCI backplane, smart fans, DC power input)
Software Description
This section contains the following software-related information:
HA-DAP Feature
The iGW-SDGC Enhancement feature impacts the HA-DAP.
iGW-SDGC Enhancement
Previously, the iDEN Gateway (iGW) supported private Dispatch calls between two mobile stations. Now, the iGW supports SDGC between a 3G network and the iDEN network. When an SDGC call is initiated, the 3G network forwards a Session Initiation Protocol (SIP) invite request. The Voice and Signaling Gateway (VSGW) then sends a Session Progress response to the Invite, and forwards the new Proxy Page Request Forward message to the
HA-DAP. Next, the VSGW reserves transcoder resources to convert 3G voice encoding to Vector Sum Excited Linear Prediction (VSELP) or Advanced Multi-Band Excitation Enhanced plus Two (AMBE++), and to convert Real Time Transport Protocol (RTP) to the iDEN version of RTP (iRTP). When the iDEN SU responds to the page, the HA-DAP sends an SDGC Page Response Forward message to the VSGW. The VSGW sends an OK response to the 3G network endpoint signaling that the Dispatch call is connected. The following new DAP statistics were added to track iGW SDGC feature functionality:
SDGC_PMC_CALLBK_ORIG - Number of callback requests received by the PMC-DAP from its own Urban.
SDGC_PMC_SC_CALLBK_ORIG - Number of callback requests received by the PMC-DAP from other Urbans within the iDEN domain.
SDGC_PMC_VSGW_CALLBK_ORIG - Number of callback requests received by the PMC-DAP that were originated by the SIP domain.
SDGC_PMC_SUCC - Number of successful SDGCs that terminated due to hang time or CSC timeout.
SDGC_PMC_INTRA_URBAN_SUCC - Number of successful S2D calls that contained participants from only the PMC Urban.
SDGC_PMC_FAIL_F1 - Number of SDGCs that failed because not even a single target in the iDEN domain responded to pages.
SDGC_PMC_FAIL_F2 - Number of SDGCs that failed for reasons not pegged by any other statistic.
SDGC_PMC_FAIL_F3 - Number of S2D SDGCs that failed because the originator opted out of the call.
SDGC_PMC_FAIL_F4 - Number of S2D SDGCs that failed because the originator validation check failed.
SDGC_PMC_FAIL_F5 - Number of S2D SDGCs that failed because no targets are available in the iDEN domain.
SDGC_PMC_SRVNG_DAP - Number of serving DAPs per SDGC within the PMC-DAP's Urban.
… - Number of active participants per SDGC.
The following existing DAP statistics were changed to track iGW SDGC feature functionality:
NUM_DAP_PRB_RECV
TOT_SDGC_TIMEOUTS
SDGC_UPDATES
SDGC_UPD_REQ_FWD_SENT
SDGC_UPD_REQ_FWD_RCVD
SDGC_CONT_FWD_SENT
SDGC_CONT_FWD_RECV
SDGC_PAR_STATUS_UPD_FWD_SENT
SDGC_PAR_STATUS_UPD_FWD_RECV
SDGC_INDV_STATUS_INFO_FWD_SENT
SDGC_INDV_STATUS_INFO_FWD_RECV
SDGC_URBAN_STATUS_INFO_FWD_SENT
SDGC_URBAN_STATUS_INFO_FWD_RECV
SDGC_STOP_FWD_RECV
SDGC_PAGE_FWD_SENT
SDGC_LOAD_SHED
NUM_SDGC_REC_RLS
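The SDGC setup flow described in this section can be summarized as an ordered message sequence. This is a reading aid, not protocol code; the message names follow the text, and the tuple representation is an assumption for illustration.

```python
def sdgc_setup_sequence():
    """(sender, receiver, message) tuples for the SDGC setup flow described above."""
    return [
        ("3G network", "VSGW", "SIP INVITE"),
        ("VSGW", "3G network", "Session Progress"),
        ("VSGW", "HA-DAP", "Proxy Page Request Forward"),
        ("VSGW", "VSGW", "reserve transcoder (3G codec <-> VSELP/AMBE++, RTP <-> iRTP)"),
        ("HA-DAP", "iDEN SU", "page"),
        ("iDEN SU", "HA-DAP", "page response"),
        ("HA-DAP", "VSGW", "SDGC Page Response Forward"),
        ("VSGW", "3G network", "OK (Dispatch call connected)"),
    ]

# print the sequence as a simple ladder
for sender, receiver, msg in sdgc_setup_sequence():
    print(f"{sender:>10} -> {receiver:<10} : {msg}")
```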
Alarm on LMT Login/Logout
Remote Access Security
Root Access Control
LMT Master iVPU
BCCB 1020
Alarm on LMT Login/Logout

The Alarm on LMT Login/Logout enhancement adds the ability to track users as they access NGD NEs. When a user logs in or out of the Local Maintenance Terminal (LMT) on the HA-DAP, HA-iHLR, or iVPU, the NE sends an ActionStatus trap to the OMC-R to record in the log. The trap includes the login/logout action, the userid, and the date and timestamp.
Remote Access Security

Previously, Dispatch NEs were accessed from the LMT using insecure applications such as HTTP, Telnet, and FTP. For enhancement R111, the following secure protocols are available for use: HTTPS, Secure Shell, and Secure FTP.
Root Access Control

The Root Access Control enhancement gives the service provider control over NE root (super user) access. If root access is disabled, and a user logs in as super user, LMT access is denied. The administrative LMT user may enable or disable root access on a per-node basis, but there is no command or GUI option to administer root access across multiple NEs. At the Root Access Management screen in the LMT, the following three menu items were added:
View Super User Access Control Status
Enable Super User Access
Disable Super User Access
Any LMT user can view information on the Enable Super User Access screen, but only Administrative users can enable or disable super user access.

Note: Root access is disabled by default. Before performing maintenance activities, ensure that root access is enabled if needed.
LMT Master iVPU

Every iVPU has a separate user account file with userids and passwords. Administrative LMT users can create and maintain user accounts on any iVPU. When an iVPU is selected as the LMT master for the Urban, the master iVPU maintains the user accounts file and the IP addresses for all the iVPUs in the Urban. The user accounts are synchronized when the tx_us_file command transmits the user account file to an iVPU. If the tx_us_file command is issued at the LMT master iVPU, the master user account file of the LMT master iVPU is transferred to every iVPU in the IP address list.
BCCB 1020
Prior to the BCCB 1020 enhancement, if an active mobile subscriber did not cross a DLA boundary for five days, the mobile subscriber entry was removed from the D-VLR. The BCCB 1020 enhancement added a Lifetime Audit Registration Renewal Required (LA RRR) process to solve the PCCH spiking issue. The LA RRR process prompts the mobile to send an RR by sending it an individually addressed Registration Renewal Required (RRR) message. The following menu items were added to the Performance Management Submenu:
View Current Lifetime Audit RRR Statistics
View Previous Lifetime Audit RRR Statistics
The following D-VLR statistics were moved from the Database Maintenance screen, to System Maintenance -> Performance Management:
View D-VLR Statistics
View Subscriber/Fleet Totals
View System Boundary Totals
View Replication Memory Usage Statistics
View Replication Internal Statistics
See Table 8-3 for a list of the parameters added for this enhancement.

Table 8-3 Parameters Added for BCCB 1020

Parameter Name                 | Default | Range | Description
LIFETIME_AUDIT_RRR_RATE_FIRST  | 20      | 1-25  | The number of subscriber units per second processed by the DAP during the first pass in the LA RRR process.
LIFETIME_AUDIT_RRR_RATE_SECOND | 10      | 1-25  | The number of SUs per second which the DAP processes during the second pass in the LA RRR process.
LIFETIME_AUDIT_RRR_RATE_THIRD  | …       | 1-25  | The number of SUs per second which the DAP processes during the third pass in the LA RRR process.
LIFETIME_AUDIT_RRR_RETRY_TIMER | 60      | 0-120 | The minimum time between retry passes for the LA RRR process. A zero value disables the LA RRR process.
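A minimal sketch of the three-pass pacing these parameters imply, assuming the send schedule is precomputed and that a zero retry timer disables the process. The third-pass rate of 5 is a placeholder (the text does not give its default), and all names are invented for illustration.

```python
def schedule_la_rrr(su_ids, pass_rates=(20, 10, 5), retry_timer_s=60):
    """Pace individually addressed RRR messages across up to three passes.

    pass_rates are SUs per second for passes 1-3; retry_timer_s is the minimum
    time between retry passes. Returns a list of (send_time_s, su_id) tuples
    for the worst case where no SU responds between passes.
    """
    if retry_timer_s == 0:
        return []  # a zero value disables the LA RRR process
    schedule = []
    t = 0.0
    for rate in pass_rates:
        pass_start = t
        for i, su in enumerate(su_ids):
            schedule.append((pass_start + i / rate, su))  # pace at `rate` SUs/second
        pass_end = pass_start + len(su_ids) / rate
        t = max(pass_end, pass_start + retry_timer_s)     # wait at least the retry timer
    return schedule
```

For example, 40 SUs at rates (20, 10, 5) with a 60-second retry timer take 2 s for the first pass, then the second pass cannot start before t = 60 s.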
See Table 8-4 for information on the statistics added for BCCB 1020.

Table 8-4 Statistics Added for BCCB 1020

Statistic Name         | Description
NUM_AUD_RRR_SENT       | Number of RRR messages sent in the last statistics reporting period as a result of the LA RRR process.
NUM_AUD_RRR_MS         | Number of mobile stations that were sent RRR messages in the last statistics reporting period as a result of the LA RRR process.
NUM_AUD_RRR_MS_OVERFLW | Number of mobile stations discarded in the last statistics reporting period of the LA RRR process that were not sent RRR messages.
BCCB 1020 provisioning changes, such as SDGC service, the encrypted dispatch flag, individual dispatch roaming, and the SCCH seed, allow the SU to perform dispatch registration by rejecting the next RR. The following BCCB 1020 provisioning conditions apply:
If T3206=10 and LA RRR is disabled or non-existent, then provisioning changes are applied within T3206 (10) hours.
If T3206=0 and LA RRR is enabled and a provisioning change occurs right before the daily audit, then the change is applied within 1-6 hours.
If T3206=0 and LA RRR is enabled and a provisioning change occurs right after the daily audit, then the change could take up to 29 hours to be applied.
Provisioning changes are also applied with a power cycle or DLA change.
If T3206=10 and LA RRR is disabled or non-existent, then most subscribers are moved to the new DAP within T3206 (10) hours.
If T3206=0 and LA RRR is enabled, home DAP reassignments are scheduled prior to the 1 AM daily audit. Otherwise, there is an additional 24 hour delay for subscribers to be moved to the new DAP.
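The timing conditions above can be condensed into a worst-case lookup. Only the combinations listed in the text are encoded; the function name is invented, and a power cycle or DLA change (which applies the change immediately) is outside its scope.

```python
def worst_case_provisioning_delay_hours(t3206_hours, la_rrr_enabled,
                                        change_before_daily_audit):
    """Worst-case hours for a provisioning change to reach the SU, per the
    BCCB 1020 conditions described in the text."""
    if t3206_hours > 0 and not la_rrr_enabled:
        # changes are applied within T3206 hours
        return t3206_hours
    if t3206_hours == 0 and la_rrr_enabled:
        # 1-6 hours before the daily audit, up to 29 hours after it
        return 6 if change_before_daily_audit else 29
    raise ValueError("combination not covered by the text")

assert worst_case_provisioning_delay_hours(10, False, True) == 10
assert worst_case_provisioning_delay_hours(0, True, True) == 6
assert worst_case_provisioning_delay_hours(0, True, False) == 29
```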
Database Maintenance
The HA-DAP uses the following activities for database operations and maintenance:
Database Query
The LMT provides the user with an option to view D-VLR statistics, query subscriber units stored in the D-VLR, and check status of the Resource Allocation Group (RAG).
Database Replication
The HA-DAP replicates database information between the two HA-DAP nodes using the Carrier Signaling Intranet and the Private HA Intranet. Database replication is automatic and starts up as part of the HA-DAP application. The LMT menu item, Activate Replication, is used to manually start a database replication if a node has been down for an extended period of time and the database needs to be re-synchronized manually. Replication events are queued and are sent periodically to the other node in bursts. During replication, the following databases are duplicated:
D-VLR
MDG
RAG information
Usability information
GTT information
LMT user information
DAP user configured information
The queue holds events during communication failures, and the backlog of changes is automatically synchronized when the connection is re-established. The logs hold about 30 minutes of changes under normal load. If a conflict condition is detected when data replication starts on the target node, then replication performs a conflict resolution: when there is a conflict, the latest version of the object is kept. If the number of conflicts for a specific region reaches the maximum threshold, then replication performs a synchronization activity for that particular region.
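A toy model of the conflict-resolution rule just described: the latest version of the object wins, and a per-region conflict counter flags the region for full synchronization at a threshold. The threshold value of 100, the version/data tuple shape, and all names are assumptions for illustration.

```python
def resolve_conflict(local, incoming, conflict_counts, region, max_conflicts=100):
    """Resolve one replication conflict.

    local/incoming are (version, data) tuples; the newer version is kept.
    The conflict is counted against `region`, and the second return value
    reports whether that region has reached the threshold and needs a full
    synchronization pass.
    """
    conflict_counts[region] = conflict_counts.get(region, 0) + 1
    winner = incoming if incoming[0] > local[0] else local   # latest version wins
    needs_region_sync = conflict_counts[region] >= max_conflicts
    return winner, needs_region_sync

# usage
counts = {}
winner, sync = resolve_conflict((3, "old"), (5, "new"), counts, region="dvlr")
assert winner == (5, "new") and sync is False
```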
HA Services Layer
The HA Services layer provides the interface for the two HA-DAP nodes to exchange state information, and to manage switchovers between the active and standby nodes. The HA Services layer uses both the Carrier Signaling Intranet and the Private HA Intranet to increase communication reliability between the two HA-DAP nodes. This interface uses a private IP address that is configured through Open Shortest Path First (OSPF) to work seamlessly with both the Carrier Signaling Intranet and the Private HA Intranet.
Network Management
The HA-DAP operation and maintenance activities are provided by the following:
LMT
The LMT is the web-based maintenance terminal interface that operates and maintains the HA-DAP. The LMT replaces the previous DAP text-based interface, and combines the functionality of the iDEN and SMT Menus. The types of maintenance operations that are available have increased dramatically. User accounts are now managed through the LMT to support multiple DAP maintenance and provisioning users, and passwords are no longer needed on individual menu items. The LMT contains the following functionality:
Activity Log Management Billing Database Maintenance and Database Query Link Maintenance Load Shedding Parameter Maintenance Software Patch Maintenance System Maintenance (includes configuration file management, and IP network management) User Management
A text-based version of the LMT is also available if needed through Lynx, a text-mode browser that renders a basic set of HTML elements. Because Lynx runs in a terminal, the interface can be viewed through an ssh session or a local console.
ELMT
The ELMT is used only when the LMT is inaccessible due to a network outage or system failure. The ELMT has limited functionality: it allows the user to start and stop the HA-DAP, shut down and reboot the HA-DAP, and start and stop the web server.
The HA-DAP LMT allows the user to backup activity log files to an archive device. The user can also view system messages through the LMT.
References
The following publications may be required to supplement the information contained in this chapter.
iVPUd Management of BPP2 Resources (wp06003)
SR14.0 IVPUdi BPP2 Resource Management (wp06006)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
iVPU Overview
The iVPU supports Dispatch, Packet Data, and Interconnect traffic, and internally configures resources on the BPP2 to perform the audio routing and duplication. The iVPU supports high availability with the active/standby SPROC/ISB cards. In the Mixed Mode architecture, the ISB uses the OSPF routing protocol for high availability. In the NGD architecture, static routes are used instead of OSPF to simplify the configuration requirements. The iVPU provides the following functionality in the iDEN network:
Performs audio routing and duplication for the NGD.
Supports the Horizontal Function with the BPP2 card.
Interfaces with the HA-DAP and MDGs to provide Dispatch and Packet Data service.
Manages control links from the HA-DAPs and MDGs (northbound links).
Provides pass-through connectivity between the HA-DAP and EBTS, the MDG and EBTS, and the MPS.
iVPUd
The iVPUd provides Dispatch audio routing and duplication. The iVPUd is configured to interface with the HA-DAP and MDGs to provide Dispatch and Packet Data control. The iVPUd also provides connectivity to the MPS network for the legacy FR network. The iVPUd operates in a Split Mode Backhaul configuration, and each site has an average of 17 carriers. In the NGD architecture, there are three control links (C0, C1, C13a) between the ACG and the iVPU interface at the physical layer. Control messages verify the connectivity between the site and the iVPUd and are sent over the C1 link. All NGD Dispatch traffic is sent over the C1 link and terminates on the iVPUd.
iVPUdi
The iVPUdi provides Dispatch audio routing and duplication, Interconnect vocoding, and Packet Data services. The iVPUdi interfaces with the HA-DAP and MDG, and also provides connectivity to the MPS for the legacy FR network. The iVPUdi operates in either a 100% Split Mode Backhaul, or a 100% All Frame Relay (AFR) Backhaul configuration. In Split Mode Backhaul, the iVPUdi supports up to 192 sites. In AFR Backhaul mode, the iVPUdi supports up to 252 sites, and 3200 sites in a single Urban Area. For additional information on how the iVPUdi provides vocoding for Interconnect service, see Chapter 7, Interconnect Subsystem Details.
iVPUi
The iVPUi supports Interconnect traffic only. See Chapter 7, Interconnect Subsystem Details for more information on iVPUi.
Hardware Description
The iVPU is comprised of the following platform configurations:
iVPUd
iVPUdi
iVPUd
There are two iVPUd units to a cabinet, and each unit supports up to 500 sites (see Figure 8-9 iVPU Front and Rear Cabinet on page 8-37). The BPP2 BCP is composed of a host processor and 18 DSPs. A subset of these 18 DSPs are assigned Frame Relay to Internet Protocol (FRIP), or the Horizontal Function.

FRIP

The FRIP function provides the following two capabilities: a FRIP interworking function to route the various traffic types to and from the sites, and an audio routing and duplication function for intra-Urban dispatch calls. Each iVPUd is configured with up to 16 groups of FRIP DSPs, with up to five FRIP DSPs per subnet. All FRIPs in a subnet are allocated to a single BPP2, and are configured only on an as-needed basis. The iVPUd configures enough DSPs to equip the FR and MPS connections configured in the database file, and ensures that a FRIP DSP is filled to capacity before a new FRIP DSP is configured. The iVPU uses round-robin criteria to determine which BPP2 card is allocated the next FRIP DSP, and distributes the FRIPs as well as the northbound NE links. Each FRIP is configured with up to 10 FR connections, and FR connections are not mixed with MPS connections on a single DSP. The FRIPs that are assigned to directly handle site traffic are called EBTS Frame Relay to Internet Protocol (eFRIPs), and FRIPs that handle MPS connection traffic are called MPS Frame Relay to Internet Protocol (mFRIPs). Digital Services Level 0 (DS0) channels comprise the FR connections assigned to the FRIP, and the FRIP is configured for up to 52 DS0s.

Horizontal Network

Up to 800 Horizontal Network (HN) links are configured on a single DSP. HN links reside on a subnet that is separate from the 16 subnets assigned to the FRIP DSPs.
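The fill-to-capacity and round-robin rules above can be sketched as a small allocator. This is a simplification, assuming only the 10-connection-per-FRIP limit from the text; subnet limits, DS0 accounting, and the northbound-link distribution are omitted, and all names are invented.

```python
import itertools

MAX_CONNECTIONS_PER_FRIP = 10  # FR connections per FRIP DSP, per the text

class FripAllocator:
    """Toy allocator: fills the current FRIP DSP to capacity before
    configuring a new one, and round-robins new FRIP DSPs across the BPP2
    cards. eFRIP and mFRIP traffic is never mixed on one DSP."""

    def __init__(self, bpp2_cards):
        self.next_card = itertools.cycle(bpp2_cards)
        # per traffic type: list of [card, connection_count] FRIP DSPs
        self.frips = {"efrip": [], "mfrip": []}

    def assign(self, kind):
        """Place one FR connection of the given kind; returns the BPP2 card used."""
        pool = self.frips[kind]
        if not pool or pool[-1][1] >= MAX_CONNECTIONS_PER_FRIP:
            pool.append([next(self.next_card), 0])  # configure a new FRIP DSP
        pool[-1][1] += 1
        return pool[-1][0]

# usage: the 11th site connection forces a second FRIP DSP on the next card
alloc = FripAllocator(["bpp2-1", "bpp2-2"])
cards = [alloc.assign("efrip") for _ in range(11)]
assert cards[:10] == ["bpp2-1"] * 10 and cards[10] == "bpp2-2"
```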
SR 16.0 05-Oct-07
Figure 8-9 iVPU Front and Rear Cabinet. Front: main filler panel, breaker panel, DC power distribution, power supply shelf, upper and lower VPU cages (front and rear; see Note 1), and fan modules. Rear: circuit breaker labels and fan modules. Note 1: System configuration determines if the VPU cage is populated with cards.
iVPUdi
The iVPUdi shelf contains up to 15 BPP2 cards, with each BPP2 configured for 15 VPF DSPs, and three I/O DSPs (see Figure 8-9 iVPU Front and Rear Cabinet on page 8-37). A BPP2 card hosts a maximum of three HA-DAP to iVPU links, and these links are distributed across the active BPP2 cards on the iVPUdi. A BPP2 card also hosts a maximum of 18 MDG to iVPU links, and these links are evenly distributed across the active BPP2s in the iVPUdi.
The BPP2 board is comprised of a BCP and 18 DSPs, where a subset of the DSPs is assigned as VPFs, IOPs, FRIPs, or the Horizontal Function. If the BPP2 card is assigned the Horizontal Function, the board is limited to one FRIP subnet. A BPP2 card without the Horizontal Function provides a maximum of two FRIP subnets. A BPP2 card is limited to either six FRIP DSPs, or three FRIP DSPs plus one Horizontal DSP. The I/O DSPs are allocated for site backhaul FR channels, MPS FR channels, IOP functions, or Horizontal Functions.
FRIP
Each iVPUdi is configured with up to 15 subnets of FRIP DSPs, and up to three FRIP DSPs per subnet. All FRIPs in a subnet are allocated to a single BPP2, and are configured only as needed. Each FRIP is configured with up to 10 FR connections, and these connections are not mixed with MPS connections. Like the iVPUd, the iVPUdi handles site traffic with eFRIPs and MPS connection traffic with mFRIPs. The eFRIPs are assigned to either Split Backhaul or AFR sites, and eFRIP and mFRIP resources are configured on up to 52 DS0s. When initialized in the iDEN network, the iVPUdi maximizes the efficiency of the eFRIPs and mFRIPs allocated on the iVPUdi shelf. A FRIP DSP must be filled to capacity before a new FRIP DSP is configured.
Horizontal Network
HN is available on the iVPUdi, and a Horizontal-capable iVPUdi assigns HN to one DSP on the BPP2 for Inter-Urban dispatch calls. The HN resides on its own subnet, independent of the 15 FRIP subnets.
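The per-card limits above can be expressed as a small validation check. This is an illustrative sketch under the limits stated in the text (function name and structure are assumptions, not the actual iVPUdi software):

```python
# Illustrative check of the BPP2 FRIP limits stated above: a card without the
# Horizontal Function may host up to six FRIP DSPs across two FRIP subnets;
# a card with the Horizontal Function is limited to three FRIP DSPs and one
# FRIP subnet (plus its single Horizontal DSP).
def bpp2_config_valid(frip_dsps, frip_subnets, has_horizontal):
    if has_horizontal:
        return frip_dsps <= 3 and frip_subnets <= 1
    return frip_dsps <= 6 and frip_subnets <= 2

# A fully loaded non-Horizontal card is valid; the same load on a
# Horizontal-capable card is not.
ok = bpp2_config_valid(6, 2, has_horizontal=False)
bad = bpp2_config_valid(6, 2, has_horizontal=True)
```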
Software Description
This section contains the following software related information:
iVPU Feature
The following feature impacts the iVPU.
With an iVPUd/di configuration, Dispatch voice traffic is carried as VoIP. Each iVPUd/di assigns an IP address to every DSP, which consumes a large number of IP addresses. The iVPU Footprint Reduction feature (IPFR) significantly reduces the number of IP addresses that are needed. When iVPUs have the IPFR feature activated, each ISB is assigned an external IP address, and the SPROC and BPP2 are assigned an IP address. For a horizontal iVPUd/di, an external horizontal IP address is also assigned. With IPFR enabled, the following enhancements are available:
Each iVPUd/di shelf utilizes 22 IP addresses (without a horizontal iVPU)
Each horizontal iVPUd/di utilizes 23 IP addresses
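The address counts above can be captured in a small helper. This is a minimal sketch of the arithmetic stated in the text; the function name is illustrative:

```python
# Minimal sketch of the IPFR address counts stated above: 22 external IP
# addresses per iVPUd/di shelf, with one additional horizontal address on
# each shelf that hosts the Horizontal Function (23 total for such shelves).
def ipfr_addresses(shelves, horizontal_shelves=0):
    if horizontal_shelves > shelves:
        raise ValueError("horizontal shelves cannot exceed total shelves")
    return 22 * shelves + horizontal_shelves

# A four-shelf deployment with one horizontal shelf: 3*22 + 23 = 89 addresses
total = ipfr_addresses(4, horizontal_shelves=1)
```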
iVPU Enhancements
The following enhancements impact the iVPU:
NGDE: Alarm on LMT Login/Logout
iVPU SPROC/ISB LED Indicators
Root Access Control
Enhanced Frame Relay Channel Statistics
Enhanced LMT Capabilities
LMT Master iVPU
Remotely Manage iVPU
VRRP Support on iVPU
This enhancement provides the ability to track users as they log in and log out of the LMT. When a user logs in or out of the LMT on the HA-DAP, HA-iHLR, or iVPU, the NE sends an ActionStatus trap to the OMC-R to record in the log. The trap includes the login/logout action, the userid, and the date and timestamp.
The iVPU SPROC/ISB LEDs are reprogrammed to show SPROC and ISB status. Previously, the SPROC and ISB were a combined FRU with one set of LEDs that indicated only ISB status. New LED states were added to show SPROC status in addition to ISB status. This enhancement applies to both the iVPUd and the iVPUdi.
This enhancement gives the LMT administrator control over NE root (super user) access. If root access is disabled and a user logs in as super user, LMT access is denied, although any user level may display the root access status. The LMT administrator may enable or disable root access on a per-node basis, but there is no command or GUI option to administer root access across multiple NEs. Root access is disabled by default; prior to maintenance activities, ensure that root access is enabled if needed. Three access control commands are added for this enhancement:
enable_su_access: Allows root access for the iVPU.
disable_su_access: Disables root access for the iVPU.
disp_su_access: Displays the current root access status for the iVPU.
This enhancement adds new FR channel statistics for the iVPU. Statistics are typically reported to the OMC-R at the end of the statistics interval. Now, channel statistics are also displayed in real time at the LMT. FR channels between the iVPU and the EBTS support the following statistics:
Maximum bytes transmitted and received per second
Total bytes transmitted and received per measurement interval
Maximum Forward Queue Depth per measurement interval
FR channels between the iVPU and the MPS support the following statistics:
Maximum bytes transmitted and received per second
Total bytes transmitted and received per measurement interval
Maximum Forward Queue Depth per measurement interval
Total packets lost, including receive errors or queue overflow events
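The per-interval statistics listed above (maximum rate and interval totals) can be sketched from hypothetical per-second samples. This is an illustrative model of the reported quantities, not the iVPU's actual statistics code; the field names are assumptions:

```python
# Hedged sketch of the per-interval FR channel statistics listed above,
# computed from hypothetical per-second byte counts collected over one
# measurement interval.
def channel_stats(tx_per_sec, rx_per_sec):
    return {
        "max_tx_bytes_per_sec": max(tx_per_sec),   # peak transmit rate
        "max_rx_bytes_per_sec": max(rx_per_sec),   # peak receive rate
        "total_tx_bytes": sum(tx_per_sec),         # interval transmit total
        "total_rx_bytes": sum(rx_per_sec),         # interval receive total
    }

# Three one-second samples of transmit and receive byte counts
stats = channel_stats([120, 900, 450], [300, 280, 310])
```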
With this enhancement, the LMT user can view frame relay channel utilization over a 30-minute statistics interval. This capability applies only to the iVPUd and iVPUdi. Also, the MMI prompt and iVPU log files are updated to include the iVPU identifier, SPROC identifier, SPROC state, userid, and timestamp.
Every iVPU has a separate user account file with userids, passwords, and associated levels. The LMT administrator can create and maintain user accounts on any iVPU. When an iVPU is selected as the LMT master for the Urban, the master iVPU contains the user accounts file and the IP addresses for all the iVPUs in the Urban. The disp_element lmt_account_master command is used to determine whether or not an iVPU is the LMT account master. The user accounts are synchronized when the tx_us_file command transmits the user account file to another iVPU. If the tx_us_file command is issued at the LMT master iVPU, the master user account file is transferred to every iVPU in the IP address list.
When the iVPU was introduced into the iDEN network, technicians used telnet to perform operations and maintenance activities. OMC-R users can launch a telnet session to remotely manage the operations of the iVPU.
The Virtual Router Redundancy Protocol (VRRP) is available to support static routing instead of OSPF. This change reduces and simplifies configuration requirements, and supports high availability and failover on the iVPU. When the SPROC/ISB Field Replaceable Units (FRUs) in slots 2 and 18 are configured for VRRP, a priority value is associated with each FRU. The VRRP router selects a master or active router for the subnet. If the ISB in slot 2 has a higher priority, then that ISB becomes the master router for the subnet. If both the active and standby routers have the same assigned priority, the router with the highest IP address is elected the active router, and all LAN traffic flows through that active router. Each ISB has a unique IP address, and the iVPU is assigned a primary IP address for use as a default gateway by the bearer cards. The Catalyst switches must be configured with the Hot Standby Router Protocol (HSRP) for high availability and failover. The HSRP introduces the idea of a virtual router, which is a group of physical routers on the same LAN. Only one router in the group is designated as the active router and forwards traffic out of the LAN. The iVPU ports share the same HSRP IP address. The VRRP router (ISB-2) sends an Address Resolution Protocol (ARP) message to the Catalyst switch to associate its Media Access Control (MAC) address with the primary iVPU address. Since the ISB-2 MAC address is associated with the primary iVPU IP address, all traffic is routed to ISB-2. In normal operation, only the master or active VRRP router receives traffic, and all VRRP routers must reside in the same subnet.
Network Management
The iVPU's operation and maintenance activities and network protocols are maintained by the following:
The following MMI commands create and manage user accounts on any iVPU:
mod_user_type <user_type> <user_name>: Changes the type (level) for an existing user account.
reset_password <user_name>: Resets the password to a default value.
tx_ua_file [iVPU IP address]: Transmits the user account file to the specified iVPU, or to all iVPUs (if executed by the master iVPU).
VRRP Protocol
The VRRP supports high availability and failover on the iVPU. The VRRP is used for static routing and supports high availability for IP hosts in a LAN. The VRRP router selects a master or active router for the subnet. If the active and standby routers are assigned the same priority, then the router with the highest IP address is elected the active router, and all LAN traffic flows through that active router. The VRRP master sends a VRRP advertisement packet every second, indicating that the router is active. The VRRP standby router does not send a VRRP advertisement. Each VRRP advertisement packet contains the following critical fields:
Virtual Router ID (VRID): VRRP group identifier (1-255).
Advertisement Interval: Interval between advertisement messages sent by the VRRP master; the default value is 1 second.
Priority: Priority configured for the VRRP master (1-255).
The IP address, protocol, and Time to Live (TTL) field in the IP header are used to forward the message to another VRRP router in the same LAN. The VRRP master router uses its physical MAC address as the message source. VRRP messages are broadcast to all other LAN stations, and the VRRP stations interpret the message based on the destination IP address.
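The master-election rule described above (highest priority wins; on a tie, the router with the highest IP address becomes active) can be sketched in a few lines. This is an illustrative model of the election logic only; a real VRRP implementation also handles preemption and advertisement timers, which this sketch omits:

```python
# Sketch of the VRRP active-router election described above: highest
# priority wins; on a priority tie, the router with the numerically
# highest IP address is elected the active (master) router.
import ipaddress

def elect_master(routers):
    """routers: list of (priority, ip_string) tuples."""
    return max(routers, key=lambda r: (r[0], ipaddress.ip_address(r[1])))

# Hypothetical slot-2 and slot-18 ISBs with equal priority:
slot2 = (200, "10.1.1.2")
slot18 = (200, "10.1.1.18")
master = elect_master([slot2, slot18])  # tie on priority, higher IP wins
```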
HSRP Protocol
The HSRP provides high availability for a pair of routers in a LAN. The HSRP introduces the idea of a virtual router, which utilizes a virtual IP address and a virtual MAC address. The HSRP supports Hello, Coup and Resign messages in the iDEN network. The HSRP periodically sends Hello messages to indicate router status or values. The standby router also sends and receives HSRP Hello messages, but does not forward traffic or respond to ARP message requests. The Hello message contains the following critical fields:
Hello Timer: Interval between Hello messages sent by this router; the default value is 3 seconds.
Holdover Timer: Length of time this Hello message is valid; the default value is 10 seconds.
Priority: The priority configured for the sending router (1-255).
HSRP Group Identifier: The HSRP group ID for this HSRP router pair (1-255).
HSRP Virtual IP Address: Virtual IP address configured for this HSRP group.
The HSRP Group Identifier, HSRP Virtual IP Address, virtual MAC address, priority, and various timer values are all configured in the HSRP. The IP addresses and TTL field in the IP header are used to forward the message to another HSRP router in the same LAN. The Ethernet MAC address varies based on the router that sends the message: the active router uses the virtual MAC address, while the standby router uses its physical MAC address.
References
The following publications may be required to supplement the information contained in this chapter.
iHLR User Manual (68P81001Y52)
SR13.4 iHLR Hardware Installation Manual (68P81001Y51)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
HA-iHLR Overview
The Highly Available iDEN Home Location Register (HA-iHLR) is responsible for dispatch authentication, and for dispatch and packet data registration in the iDEN network. The HA-iHLR consists of two redundant iHLR nodes that provide increased reliability in an active/standby configuration. The active node processes the full load of all mobility, call processing, and provisioning operations, while the standby node is available when needed. The active node contains a standalone database that stores subscriber information. The subscriber information includes the types of Dispatch calls subscribers are provisioned for, fleet assignments, and talkgroup and individual subscriber identification numbers. The HA-iHLR supports the following functionality in the iDEN network:
Provides a stand-alone database that stores and maintains subscriber information for Dispatch and Packet Data.
Receives and maintains its own set of Global Title Translation (GTT) data.
Transfers partial databases to other HA-iHLRs when needed.
Provides a LMT interface for subscriber provisioning and services.
Offers feature licensing to view and backup feature data.
Provides the Remote Management Console (RMC) and Service Resource Module (SRM) Command Line Interface (CLI) to monitor and control the HA-iHLR.
The HA-iHLR currently supports the following capacity in the iDEN network:
Hardware Description
The HA-iHLR is comprised of the following main components:
TS40 AlphaServer
The HA-iHLR cabinet contains two AlphaServer TS40 servers, an alarm panel, and a disk array (see Figure 8-10 HA-iHLR Front and Rear Cabinet on page 8-47). The AlphaServer TS40 server can support up to four CPUs, whereas the TS20 server supports up to two CPUs. Each CPU module has 8 MB of secondary backup cache. Memory is installed on memory motherboards (MMBs) on the system motherboard. The MMBs have either 4 Dual Inline Memory Module (DIMM) slots (TS20 server) or 8 DIMM slots (TS40 server). The PCI backplane has two 33 MHz PCI buses that support 64-bit PCI slots. The AlphaServer chassis houses a CD-ROM drive, a floppy disk drive, an optional DAT tape drive, and an optional disk storage cage. The Operator Control Panel (OCP) is located on the front of the server and consists of three buttons, two LEDs, and a 16-character alphanumeric display. The display indicates status during power-up and testing. The power button turns the system AC power on and off, and the GREEN LED lights when the power button is depressed. The reset button restarts the system and reinitializes the console firmware, and the amber halt LED lights when the halt button is depressed. The power supply provides power to system components. One power supply provides enough power for most configurations; a secondary power supply provides redundancy if the original power supply fails.
Alarm Panel
The alarm panel has status indicators for each active/standby node. If there is an alarm condition in either node, the alarm panel indicates which node needs to be checked.
Disk Arrays
There are two separate disk arrays, a primary and a mirror array for each HA-iHLR node, with 8 disks each. When a single disk fails in either the primary or the mirror disk array, a switchover does not occur. If two disks fail, the node switches over.
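The switchover rule above reduces to a simple threshold check. This is a minimal sketch of that rule as stated in the text (one failed disk is tolerated, a second failure triggers switchover); the function name is illustrative:

```python
# Minimal sketch of the switchover rule stated above: a single disk failure
# in the primary or mirror array does not force a switchover, but a second
# failed disk (in either array) does.
def needs_switchover(failed_primary_disks, failed_mirror_disks):
    return (failed_primary_disks + failed_mirror_disks) >= 2

# One failure in either array is tolerated; two failures are not.
tolerated = needs_switchover(1, 0)      # single primary-array failure
switchover = needs_switchover(1, 1)     # second failure triggers switchover
```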
Figure 8-10 HA-iHLR Front and Rear Cabinet. Front and rear views show the 1U panel, DC power connections, and the front and rear of the two TS40 servers.
Software Description
This section contains the following software related information:
HA-iHLR Features
All of the new features for the HA-iHLR impact or extend LMT access or functionality. See the following list of features for the HA-iHLR:
Load Shedding Status
Ethernet Port Status
Health Check
Ping & Traceroute
Restricted Access Key
iHLR Data Collection
Data Collection Timer
This feature adds the Load Shedding information option to display the current CPU utilization load level. This option also indicates if any services are affected. Any user can utilize the Load Shedding information option.
This feature utilizes the Physical Port Status Check tool to test the status of the HA-iHLR's four Ethernet ports. This tool is an option in the System Maintenance Menu, and any user can display the status information.
Health Check
This feature creates a new summary report for the existing HA-iHLR Health Check. The summary report also provides links to detailed reports for database replication conflicts and HA-iHLR - DAP link status. In addition, the View Details button launches the Health Check Detailed Results report, which includes which tests passed or failed. The HA-iHLR Health Check is an option in the System Maintenance Menu, and can be run by any user.
This feature provides a connectivity check using the LMT. Ping provides a simple end-to-end IP connectivity check, and TraceRoute reveals any routers in the end-to-end route.
This feature prevents unauthorized users from modifying the HA-iHLR configuration. An access key is required to update feature data, and this option is restricted to developer and engineer users.
This feature allows the LMT user to select files for data collection. The selected files are bundled for external debugging of field cases or defects.
This feature controls the minimum time interval between consecutive HA-iHLR data collection events. The minimum time interval may range from 0 to 15 minutes.
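The rate limit above can be sketched as a simple throttle: a new collection event is allowed only after the configured minimum interval has elapsed since the previous one. This is an illustrative model only; the class and method names are assumptions, not the HA-iHLR's actual implementation:

```python
# Hedged sketch of the data-collection rate limit described above: a
# configurable minimum interval (0-15 minutes) must elapse between
# consecutive collection events.
class CollectionThrottle:
    def __init__(self, min_interval_minutes):
        if not 0 <= min_interval_minutes <= 15:
            raise ValueError("interval must be 0-15 minutes")
        self.min_interval = min_interval_minutes * 60  # seconds
        self.last = None

    def allow(self, now_seconds):
        # First event, or enough time elapsed since the last event
        if self.last is None or now_seconds - self.last >= self.min_interval:
            self.last = now_seconds
            return True
        return False

throttle = CollectionThrottle(5)  # 5-minute minimum interval
# First request at t=0 is allowed; a request at t=120 s is throttled;
# a request at t=300 s (5 minutes later) is allowed again.
```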
NGDE: Alarm on LMT Login/Logout
Remote Access Security
Root Access Control
LMT Master iVPU
This enhancement gives the LMT administrator the ability to track users as they log in and log out of the LMT. When a user logs in or out of the LMT on the HA-DAP, HA-iHLR, or iVPU, the NE sends an ActionStatus trap to the OMC-R to record in the log. The trap includes the login/logout action, the userid, and the date and timestamp.
Previously, Dispatch NEs were accessed from the LMT using insecure applications such as HTTP, Telnet and FTP. Now, the following secure protocols are available for use: HTTPS, Secure Shell and Secure FTP.
This enhancement gives the service provider control over NE root (super user) access. If root access is disabled and a user logs in as super user, LMT access is denied. The LMT administrator may enable or disable root access on a per-node basis, but there is no command or GUI option to administer root access across multiple NEs. Note: Root access is disabled by default. Prior to maintenance activities, ensure that root access is enabled if needed.
Every iVPU has a separate user account file with userids, passwords, and associated levels. The LMT administrator can create and maintain user accounts on any iVPU. When an iVPU is selected as the LMT master for the Urban, the master iVPU contains the user accounts file and the IP addresses for all the iVPUs in the Urban. The user accounts are synchronized when the tx_us_file command transmits the user account file to another iVPU. If the tx_us_file command is issued at the LMT master iVPU, the master user account file is transferred to every iVPU in the IP address list.
Network Management
The HA-iHLR uses the following network management interfaces in the iDEN network:
LMT
Emergency Local Maintenance Terminal (ELMT)
RMC
SRM
Backup and restore Command Line Interface (CLI)
Provisioning transaction log file
LMT
The LMT is the web-based interface that operates and maintains the HA-iHLR. The LMT has several management/administrator interfaces: the Maintenance Terminal Administrator (MTA), the Provisioning Client Manager (PCM), the Provisioning Interface Manager (PIM), the Network Operator Technician (NOT), the Network Operator Administrator (NOA), and the Migration Operator Group (MOG). The LMT contains different options for each management/administrator interface. See the Home Location Register (iHLR) User Manual (68P81001Y52) for specific information on each interface.
The MTA manages the accounts and sessions for LMT users. The PCM manages maintenance activities related to the provisioning clients. The PIM is responsible for the maintenance activities related to the provisioning interface, such as viewing provisioning session information and provisioning priority management. The NOT provides the first level of system maintenance and diagnostics. The NOA performs the highest level of customer system maintenance and diagnostics. The MOG performs steps in preparation for a fleet migration. A text-based version of the LMT is available if needed through Lynx, a text-mode browser that renders a basic set of HTML elements; because of this, the interface is usable through ssh or a local console.
ELMT
The ELMT is used only when the LMT is inaccessible due to network outage or system failure. The ELMT has limited functionality and allows the user to start and stop the HA-iHLR, shut down and reboot the HA-iHLR, and start and stop the web server.
RMC
The RMC provides a command-line interface to monitor and control the system through console access. For additional information on RMC commands see the iDEN Commonly Used Commands for Troubleshooting and Restoration (6881012Y94).
SRM
The SRM provides a command-line interface to perform system management and maintenance tasks. The SRM prompt (P00>>>) is available when a node is in shut-down status. A console connection is needed to access a node while it is in shut-down status. For additional information on SRM commands, see the iDEN Commonly Used Commands for Troubleshooting and Restoration (6881012Y94).
The Backup and Restore CLI allows the user to retrieve a file using FTP to perform a backup or restore, and is accessed through an rlogin session on the HA-iHLR. The LMT web-based interface and database engine must be running for the Backup and Restore CLI to function properly.
A provisioning interface is available for the HA-iHLR based on the iDEN Provisioning Protocol (iPP). The provisioning client applications send provisioning transactions to the HA-iHLR to create, modify, remove, or query provisioning objects in the HA-iHLR database. Results of each provisioning transaction are provided in the Provisioning Transaction Log file if logging is enabled in the HA-iHLR LMT. See the Home Location Register (iHLR) User Manual (68P81001Y52) for more information.
References
The following publications may be required to supplement the information contained in this chapter.
iDEN CPX1205 Common Platform Hardware Reference Manual (68P80801H50)
iDEN Dispatch Access Controller Application Manual (68P80801H55)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
iDAC Overview
The iDAC provides the routing of voice for Inter-Urban Dispatch calls across different Urbans. The iDAC communicates with DAPs, APDs, and the OMC-R in the same Urban, and with remote iDACs in other Urbans. An iDEN subscriber in one Urban area can make a Dispatch call to an iDEN subscriber in another Urban. When a DAP sets up an Inter-Urban Dispatch call, it specifies the local APD and the remote iDAC to route the voice. When the APD receives voice for a call, it routes the voice to the iDAC. When the voice reaches the remote iDAC, that iDAC converts and sends the voice to its local APD. The iDACs are deployed in an N+1 redundant configuration in each Urban, and have a high-reliability PCI system that allows for the hot-swap replacement of CPU and I/O boards and fans. The iDAC provides the following functionality in the iDEN network:
Sends and receives voice and control traffic to APDs and remote iDACs.
Reports alarms, state changes, and statistics to the OMC-R.
Maintains performance statistics and uploads the statistics when requested by the OMC-R.
Transports voice packets between the iDAC and the SMs.
Provides the LMT interface to operate and maintain the iDAC.
One iDAC supports 15 DAPs (TCP/IP).
One iDAC can support up to 32 APDs (one FR connection per APD).
One Urban has at least two redundant iDACs available.
One iDAC supports 800 remote iDACs (UDP/IP).
Hardware Description
The iDAC is comprised of the following main components:
CPU Board
The iDAC CPU board is equipped with one 10/100 Megabits per second (Mbps) Ethernet port and one RS-232 serial port connector. The CPU board contains 192 MB of Compact Flash and 256 MB of RAM. The PCI Mezzanine Control (PMC) module is mounted on the CPU board, and adds four 10/100 Mbps Ethernet ports at the rear of the chassis. The PMC module has two LEDs per port: the GREEN LED indicates port status, and the YELLOW LED indicates link activity. There are also LEDs to indicate power, operational status, external faults, and internal faults. See Figure 8-11 iDEN Dispatch Access Controller - Front View on page 8-55 for additional information on the iDAC. The CPU Transition Module (TM) is located in a slot at the rear of the chassis, and provides four Ethernet ports and two RS-232 ports at the rear of the chassis. The iDAC uses the Ethernet port marked 1 for the 100BaseT connection to the OMC-R and DAPs. The Ethernet port marked 2 is for the 100BaseT connection to the remote iDACs. The other ports are not utilized.
The HSSI interface module carries up to 10 Mbps of traffic. The iDAC uses this interface to communicate with FR devices through the MPS. The iDAC provides a total of two HSSI ports per subsystem, and these ports are also located at the rear of the chassis. The HSSI port marked PORT 1 connects to the APDs. PORT 2 is not utilized.
Alarm Panel
The iDAC alarm panel provides a visible indication of the condition of iDAC hardware and software. The iDAC software uses the two alarm panel LEDs on the side of the CPU card; the LEDs are marked OUT SVC and IN SVC. Each iDAC system slot has a RED LED on the front panel that indicates the status of the iDAC's downloaded software. The iDAC has a GREEN LED on the front panel that indicates the status of the iDAC application start-up and shut-down cycle. See the iDEN Dispatch Access Controller Application Manual (68P80801H55) for additional information.
There are four hardware-controlled LEDs that monitor the status of power supply voltages, fan speed, and on-board temperature sensors. The software-controlled LEDs adjacent to each slot are programmed to be RED, GREEN, or YELLOW. Installed (INST) LEDs are programmed to be YELLOW, and indicate that a PCI card is in the slot and the driver is installed. The connection status LEDs are located in the two vertical panels of alarm LEDs on the side of the I/O cards. All the lights on the first vertical panel are RED; all the lights on the second vertical panel are GREEN. On the vertical alarm panel labeled DOWN, an LED can be On or Off. On the vertical alarm panel labeled UP, an LED can be On, Off, or Flashing. Only one LED of any pair should be On or Flashing at any time.
Figure 8-11 iDEN Dispatch Access Controller - Front View. The front view shows the MCP750 CPU module, slot numbers, alarm/exhaust fan module, ESD bond point, HSSI module, and power supply/inlet fan module.
See Figure 8-12 for additional information on the iDAC.
Figure 8-12 iDEN Dispatch Access Controller - Rear View. The rear view shows the dual earth ground attachment points, DC input plug, ESD bond point, DC power input module, slot numbers, and HSSI transition module.
Software Description
This section contains the following software related information:
The GTT-IIC ASCII file contains the iDAC-iDAC configuration data. The iDAC downloads the configuration information from the OMC-R and validates the information. If the remote iDAC is configured and validated, the information is written to the file. If the iDAC-iDAC information is changed through an On-Line Configuration Change (OLCC), this file is regenerated after the data is successfully validated.
The OMC-R provides the iDAC with software and configuration files. If the iDAC does not have a connection with the OMC-R, the previously downloaded executable and configuration files are utilized.
Network Management
The iDAC uses the following network management interfaces in the iDEN network:
LMT
The LMT helps to operate and maintain the iDAC system. This field-accessible login is available through the iDAC's serial port.
The iDAC reports alarms, state changes, and statistics to the OMC-R to notify the operator when an error or warning condition is detected. The iDAC also maintains performance statistics and uploads the statistics when requested by the OMC-R. The OMC-R system status display console provides status on the iDAC; logical link status for iDAC-DAP links; container status for iDAC-DAP (DAP links), iDAC-APD (SM links), and iDAC-iDAC (WAN links); and container status for the load.
References
The following publications may be required to supplement the information contained in this chapter.
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
APD Overview
The APD provides the duplication and routing of voice packets for the legacy iDEN network. When a Dispatch call is initiated, the DAP sends the APD a routing table that contains all of the targeted subscriber IDs and the EBTS sites associated with each mobile subscriber. When a Dispatch voice packet is received, the APD looks up the call ID in the routing table and performs the necessary duplication and routing. After receiving voice packets from one site on a call, the APD duplicates and routes them to the other sites on the call. Packets are transferred between the APD and packet switches through four High-Speed Serial Interface (HSSI) ports. The HSSI interface carries all non-OMC-R based packets (typically voice packets). The packets are duplicated and routed to the various HSSI boards for re-transmission to the packet switches. One of the HSSI interfaces is also configured through the OMC-R to carry the call control interface. Configuration information is sent by the OMC-R to the APD. The APD provides the following functionality in the iDEN network:
Duplicates and routes voice packets to each site on the call.
Reports alarm and state information to the OMC-R.
Provides the MMI interface to operate and maintain the APD.
500 simultaneous calls per HSSI port
A total of 2000 simultaneous calls across all four ports
3200 EBTS sites, 800 per port (EBTS and iDACs included in the total 3200-device capacity)
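The duplication step described above (look up the call ID in the DAP-supplied routing table, then duplicate the voice packet to every target site except the one it arrived from) can be sketched briefly. This is an illustrative model only; the routing-table structure and names are assumptions, not the APD's actual data layout:

```python
# Sketch of the APD duplication step: look up the call ID in the routing
# table the DAP supplied, then duplicate the voice packet to every site on
# the call except the site it was received from.
def duplicate_packet(routing_table, call_id, source_site, packet):
    sites = routing_table.get(call_id, [])
    return [(site, packet) for site in sites if site != source_site]

# Hypothetical routing table for one call spanning three EBTS sites
table = {"call-42": ["site-A", "site-B", "site-C"]}
out = duplicate_packet(table, "call-42", "site-A", b"voice")
# A packet arriving from site-A is duplicated to site-B and site-C only
```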
Hardware Description
The APD is comprised of the following main components:
MTX board
HSSI I/O boards
APD power supply
LED board
MTX Board
The MTX board provides the APD with the capability to receive and retransmit data packets through the HSSI I/O boards. The MTX board contains a processor, memory, a bus system, a 10BaseT ethernet interface, four PCI slots, and one RS-232 serial interface port. The MTX board interfaces with the HSSI I/O boards using a 32-bit PCI bus. The OMC-R exchanges status and control information with the MTX through a RJ45 twisted pair connector on the board. The MTX board interfaces with the LED board to provide system status information over four interfaces. These interfaces include the following:
HW STAT: Sends signals to the LED board to indicate APD hardware status conditions such as processor run, LAN activity, and PCI activity. The interface also monitors push-button interrupts from the buttons located on the LED board.
PWR STAT: Sends an OK status signal to the LED board.
SOFT STAT: Sends signals to the LED board to indicate the status of the HSSI ports.
DCE: The RS-232 Data Communications Equipment (DCE) port provides a service interface for the APD.
The APD contains four HSSI I/O boards, and the boards transmit and receive packets. The HSSI ports are controlled by the MTX. The HSSI I/O board contains a main card that interfaces with the PCI bus, and an HSSI daughter card that provides the HSSI interface. Four PCI slots interface with the four HSSI boards used by the MTX board. When a port is opened, the incoming packet is duplicated and transferred by the HSSI boards to the PCI bus and on to the MTX.
APD Power Supply
The APD power supply converts the power coming in from the site to several different voltages. The APD power supply receives -48 volts direct current (Vdc) site power through a front panel power/breaker switch. The power supply converts the -48 Vdc to regulated and filtered +12 Vdc and +5 Vdc. The +12 Vdc and +5 Vdc supply outputs are applied to the MTX board. The +12 Vdc also powers two front panel fans. See Figure 8-13 Advanced Packet Duplicator - Front View on page 8-60.
LED Board
The LED board contains the APD front panel LED status indicators, push button switches, and a service access connector. The APD software uses four of the eight sets of port LEDs on the front panel; these four sets correspond to the four APD ports. The LED board provides the following functionality:
HW LEDs - The hardware LEDs include a RUN LED, a LAN LED, and a PCI LED. The signals for these LEDs are received through the HW STAT interface from the MTX board. The POWER LED is enabled through the PWR STAT interface from the MTX board.
Front-Panel Push Buttons - The front panel push buttons include an Abort button and a Reset button. These buttons send signals to the MTX board through the HW STAT interface to initiate an abort or reset.
Software-Controlled LEDs - The software-controlled LEDs include eight Installed and Active LED sets, which correspond to the eight MTX ports. (Only four of the eight sets are used in the APD; the other four are reserved.) The signals for these LEDs are received through the SOFT STAT interface from the MTX board. An Alarm LED indicates whether a hardware failure occurred during hardware initialization.
DCE - The DCE interface from the MTX board is directly connected to the LED board to provide front-panel access to the DCE interface.
Software Description
This section contains the following software-related information about the OMC-R interface.
OMC-R Interface
The APD receives its configuration software from the OMC-R through an ethernet interface. The APD also reports alarm/state information to the OMC-R through the same ethernet interface.
Network Management
The APD uses the Man Machine Interface (MMI) as the network management interface in the iDEN network:
MMI
The APD is equipped with an RS-232 interface that allows a field service terminal to serve as an MMI between field service personnel and the APD. MMI commands are used for the following system maintenance tasks:
Commissioning management
Configuration management
Device management
Fault management
Performance management (statistics)
Resource management
System management
MMI commands are case-sensitive. Function names, device names, and statistical parameter names are exceptions to this rule: they must be entered in uppercase.
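The case rule above can be sketched as a small normalization helper. This is a hedged illustration, assuming a tokenizer has already classified each token; the category names are hypothetical, not the APD's actual grammar:

```python
# Sketch of the MMI case rule: commands are case-sensitive, but
# function, device, and statistical parameter names are uppercase.
# Token categories here are illustrative assumptions.
UPPERCASE_CATEGORIES = {"function", "device", "statistic"}

def normalize_token(token: str, category: str) -> str:
    """Uppercase only the token categories the manual calls out;
    leave all other tokens exactly as typed."""
    return token.upper() if category in UPPERCASE_CATEGORIES else token

print(normalize_token("status", "function"))   # STATUS
print(normalize_token("Verbose", "argument"))  # Verbose
```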
References
The following publications may be required to supplement the information contained in this chapter.
CPX Common Platform Hardware Reference Manual (68P81131E11)
iDEN Surveillance Gateway (iSG) Application Manual (6881012Y17)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
iSG Overview
The iSG provides law enforcement surveillance capability for the iDEN Dispatch and Packet Data subsystems. When provisioning surveillance in the iDEN network, each Urban has one or more pairs of iSGs for redundancy. The iSG utilizes the iDEN network to retrieve surveillance information, and forwards it to the appropriate Law Enforcement Agency (LEA), where it is recorded in a Law Enforcement Monitor (LEM). The LEM connects to the iSG through the TCP/IP network, and the surveillance information is transmitted to the LEM. Before connecting to the iSG, the LEM must log into the iSG using the surveillance ID and password assigned by the warrant. The iSG also maintains a list of which IMSI subjects are under surveillance and what type of surveillance is being conducted. The iSG provides the following functionality in the iDEN network:
Provides surveillance capability for the Dispatch and Packet Data subsystems.
Retrieves surveillance information and forwards it to the LEA.
Filters dispatch call data streams received from DAPs and APDs, duplicates as necessary, and routes to one or more LEMs.
Maintains an IMSI listing for surveillance activities.
Reports alarms, state changes, and statistics to the OMC-R.
When the originator is under surveillance in the originator's Urban area, the call is reported to the iSG with the originator as subject and the target as associate. If the originator is under surveillance in the target's Urban area, nothing is reported to the iSG in the target's Urban area. When the target is under surveillance in the target's Urban area, the call is reported to the iSG with the target as subject and the originator as associate. If the target is under surveillance in the originator's Urban area, nothing is reported to the iSG in the originator's Urban area.
In the Legacy architecture, a voice path is set up by the DAP, APD, and MPS to deliver voice over an HSSI FR connection to the iSG. The iSG converts the call information and the user voice to pass on to the LEA using VoIP. In the NGD architecture, a voice path is set up by the iVPUd to deliver voice over IP to the iSG.
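The four reporting cases above reduce to one rule: a call is reported to an Urban area's iSG only when the surveilled party's surveillance applies in that same Urban area. A hedged sketch of that decision logic, with hypothetical names (this is not Motorola's code):

```python
# Illustrative restatement of the iSG reporting rules. The surveilled
# party becomes the subject; the other party becomes the associate.
def isg_report(urban, surveilled_party, surveillance_urban, other_party):
    """Return the report for `urban`'s iSG, or None when nothing is
    reported (surveillance applies in a different Urban area)."""
    if urban != surveillance_urban:
        return None
    return {"subject": surveilled_party, "associate": other_party}

# Originator surveilled in the originator's Urban area: reported there.
print(isg_report("A", "originator", "A", "target"))
# Originator surveilled in the target's Urban area: nothing reported.
print(isg_report("B", "originator", "A", "target"))
```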
Hardware Description
The iSG consists of the following components:
CPU
High Speed Controller (HSC) board
HSSI Adapter board
Alarm panel
DC power distribution panel
Cooling panel
The iSG contains two separate subsystems in a single chassis for each Urban area. Each iSG pair is in an active/active configuration and provides surveillance on the identical set of IMSIs. As a result, surveillance features remain available even if an individual iSG is removed from service.
CPU
The CPU board is a single-slot Compact PCI board equipped with the following features:
Error Correction Code (ECC) DRAM
3 ethernet ports, 1 async port (COM1), 2 USB ports
Compact Flash Module
On-board debug monitor with self-test diagnostics and status LEDs
Reset and Abort buttons
Motorola PowerPC 750 processor
HSC Board
The HSC board connects to the CPU module through a local PCI bus. The HSC functionality provides high availability in the iSG, and maintains a control status register that contains information on the status of each module. The HSC boards provide a bridge between the two seven-slot Compact PCI buses so that they are managed by a single CPU module. The HSC monitors and controls the peripherals, which include power and fan sleds, board and system LEDs, and alarms.
HSSI Adapter Board
The HSSI adapter board transfers data between a host computer and its WAN interface. The HSSI adapter board consists of a motherboard and a rear panel HSSI interface module, and contains the following components:
Dual local buses
CPCI Bus and Interface Controller (CPCIBIC)
Hot swap CPCI accelerator
High-level Data Link Control (HDLC)
The HSSI transition module is linked back to back to the HSSI adapter by a J5 connector. The transition module provides two HSSI (TIA/EIA-613) interfaces to the HSSI adapter. See Figure 8-14 iSG 1 and iSG 2 Cabinet on page 8-65.
Alarm Panel
The iSG alarm panel provides information about each of the boards and the overall system. The three alarm panel LEDs are positioned above the CPU card on each iSG domain. Each alarm panel has three LEDs: OUT OF SERVICE, IN SERVICE, and IMPAIRED.
DC Power Distribution Panel
The DC power distribution panel is located at the rear of the chassis, and distributes the DC input power to the iSG power supplies. The DC power distribution panel supports redundant input power supplies to support high availability in the iSG. The DC power supplies are mounted on a sled along with the cooling fans. Power to both domains is controlled by the circuit breaker on the back of the chassis. Use this switch only during an emergency power down situation.
Note: The iSG should not be shut off without properly shutting down the system software.
Cooling Fans
Three fans provide forced-air cooling for the iSG's power supplies, CPU and I/O controller card cage, and transition module card cage. Only two fans are needed to adequately cool the components; the third fan provides high availability in case of a fan failure. The fans are mounted on the front of the fan/power supply sled.
Software Description
This section contains the following software-related information:
SPA
The SPA is configured by entering parameter values in the SPA web interface that manages and provisions Dispatch surveillance. The SPA database contains the warrant information, and is maintained in a secure location. If an iSG is permanently removed from service, consult with the SPA administrator to remove those impacted iSGs from the SPA database. The following parameters are configured through the SPA web interface:
All Urban area networks (identified by the Urban Area Network Code (UANC))
The iSG pair associated with each UANC
For each iSG, the IP address that allows SPA communication with the iSG
For each iSG, the IP address that allows LEM communication with the iSG
iSG Software
The iSG consists of the iCCP hardware and the iSG software. The iSG software is provided by Motorola. When the iSG software is loaded on the iCCP, the Dispatch surveillance feature set is available.
Network Management
The iSG uses the following network management interfaces in the iDEN network:
SPA LMT
SPA
The SPA is the web interface that manages and provisions Dispatch surveillance. The service provider uses the SPA to initiate surveillance on an individual or group based on information included in a warrant. The iSG uses an ethernet connection from the SPA to provide surveillance information to the DAP.
LMT
An LMT login is available through the iSG serial port by attaching a serial terminal device to the RS-232 port. Log in as field; the FieldMaint prompt appears and the field shell is ready to receive commands. Typing EXIT ends the shell session. See the iDEN Surveillance Gateway (iSG) Application Manual (68P80802D75) for additional information on the LMT and associated commands.
References
The following publications may be required to supplement the information contained in this chapter.
GR Implementation Manual System Release MPS5.1 (68P80802A85)
MPS Technical Reference Guide (68P80802A95)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
MPS Overview
In the Legacy network, the MPS is a FR digital data packet switch that provides the interface for EBTS FR connections to the DAP, APD, and MDG. The MPS manipulates the paths used by Dispatch voice packets during a Dispatch call, and the data packet paths used during Packet Data networking. The DAP controls the source and definitions for routing and movement of voice and data packets. The MPS is implemented in a tiered architecture and routes signaling and control information between the DAP, MDG and the EBTS sites. The MPS provides the following functionality in the iDEN network:
Provides the FR interface to the DAP, APD, and MDG.
Controls the overhead and manages the flow of voice and data packets between the Dispatch NEs.
Routes voice and data packets for group Dispatch calls and network multicasts to and from the APDs to the correct destination.
There are two MPS platforms for the iDEN network: Broadband Packet Exchange (BPX)/Multi-Gigabit Exchange (MGX) 8850 and 8820, and the Internet Gateway Exchange (IGX). When the BPX is used in conjunction with the MGX, together they form the MPS complex. The BPX provides the following functionality in the iDEN network:
Broadband connectivity between BPX nodes.
Fast switching of Asynchronous Transfer Mode (ATM) cells.
FR is used in the Legacy iDEN network for dispatch, packet data, network management and billing services. The BPX, however, is an ATM switch, and the MGX shelf converts FR packets to ATM cells. The ATM cells are then routed by the BPX to another MGX connection to be transferred back to FR packets for use by another NE. The MGX provides the following functionality in the iDEN network:
Provides connectivity and an interface to the Dispatch NEs.
Transfers the FR connections over a single ATM trunk to the BPX for switching.
Maps FR packets onto ATM cells, and routes them to the BPX for delivery to the designated NE.
Utilizes from 12,000 to 32,000 Permanent Virtual Circuits (PVCs) per MPS complex.
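The FR-to-ATM conversion performed by the MGX shelf can be sketched in simplified form. This is a hedged illustration of the segmentation idea only (roughly in the spirit of AAL5 frame-to-cell interworking): the frame payload is split into 48-byte ATM cell payloads, padding the last one. Real interworking also builds cell headers and AAL5 trailers, which are omitted here.

```python
# Simplified sketch of FR-to-ATM segmentation: split a Frame Relay
# frame into padded 48-byte ATM cell payloads. Headers/trailers omitted.
ATM_PAYLOAD = 48

def segment_frame(frame: bytes) -> list:
    """Split a frame into 48-byte cell payloads, zero-padding the last."""
    cells = []
    for i in range(0, len(frame), ATM_PAYLOAD):
        chunk = frame[i:i + ATM_PAYLOAD]
        cells.append(chunk.ljust(ATM_PAYLOAD, b"\x00"))
    return cells

cells = segment_frame(b"x" * 100)
print(len(cells))  # 3 cells: 48 + 48 + 4 bytes (last cell padded)
```

At the far end, another MGX shelf performs the reverse reassembly so the destination NE sees an ordinary FR packet.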
Hardware Description
The MPS consists of the following components:
BPX
MGX 8820
MGX 8850
Service Expansion Shelf (SES) PNNI
BPX
The BPX switch consists of a shelf with fifteen card slots that may be co-located with the MGX 8820, MGX 8850, and Service Expansion Shelf (SES) interface as needed. Three of the slots on the BPX shelf are reserved: two for Broadband Controller Cards (BCCs) that run the AutoRoute and/or Private Network-Network Interface (PNNI) software, and one for the Alarm Status Monitor (ASM) card. One of the BCCs is designated the active processor, while the other BCC is the standby. The other twelve slots are general purpose slots for network interface or service interface cards. For more information, see Figure 8-15 MPS Cabinet Lineup on page 8-71. The BCC is the central processor for the BPX switch. The BCC runs system software and manages other network cards. The BCC stores configuration databases and system software for the BPX. Network clocking is also provided by the BCC. The Broadband Switch Module (BXM) card provides the FR connection on the MGX shelf and supports lines or trunks out of the BPX switch.
MGX 8820
The MGX 8220 shelf (version 5.0.20) is comprised of a card cage with 16 slots. A backplane runs across the card cage spanning all 16 slots, which allows a function module card to be inserted in the slot from the front, and a smaller line module card to be inserted in the slot from the back. The front row is reserved for function modules, as they perform complex tasks such as FR-to-ATM conversion. The back row is used for line modules, and these cards provide the interface to the MGX 8220 shelf. In the MGX 8220 shelf, slots 1 and 2 are reserved for Broadband Network Module (BNM) cards, and slots 3 and 4 are reserved for Axis Shelf Controller (ASC) cards. Slots 15 and 16 are reserved for the SRM. Slots 1, 3, and 15 work together as an active/standby set. The same is true for slots 2, 4, and 16. Slots 5 to 14 (10 slots) are available for Service Modules (SMs), which provide communication and transfer capability.
MGX 8850
The MGX 8850 shelf (version 1.3.11) is comprised of a card cage with 32 slots. Slots 7 and 8 are reserved for the primary and redundant PXM1s, and slots 15, 16, 31, and 32 are reserved for the SRMs. The PXM card module is the central processor of the MGX 8850 and is responsible for switching cells, and storing configuration and firmware images for the other card modules in the shelf. The PXM supports two back cards: one for shelf management, and one for trunk and line interfaces. All PXMs must have a PXM-UI installed in the upper-bay back card slot. The lower-bay back card slot houses the trunk/line interface card. The maximum number of slots designated for service modules is 24, unless double-height cards are utilized. Slots 7, 15, and 31 (PXMs and SRMs) work together as a card set. The same is also true for slots 8, 16, and 32. The slots on the top half of the card cage are reserved for T3/E3 and OC-3 cards, as the higher bandwidth cell buses reside in the upper portion of the backplane.
SES PNNI
The SES PNNI Controller contains two PXMs that run the PNNI software (version 4.0.15). One of the PXMs is the active processor, while the other serves as the standby. The PNNI controller connects to the BPX switch by the ATM/OC-3 interface. For every installed PXM front card, there is a corresponding pair of PXM back cards. A pair of PXM back cards consists of the following:
User interface back card - The PXM-UI, which provides the following: an ethernet port, an RS-232 maintenance port, an RS-232 control port, T1/E1 timing reference ports, and an audio and visual alarm interface port.
ATM trunk interface - The PXM ATM uplink provides line drivers for the uplink interface. For the SES PNNI application, the PXM ATM interface uplink card uses a single port from the quad single-mode port back card.
Software Description
This section contains the following software-related information:
The SES node architecture consists of the BPX switch and the SES PNNI Controller. In an iDEN network, the SES PNNI Controller provides the signaling and routing information to the BPX BXM uplink trunks. The PNNI establishes Soft Permanent Virtual Circuits (SPVCs) that connect the FR channels on the MGX shelves. The FR channels of the MGX convert FR data to ATM cells. A connection on the SES allows the two endpoints of the iDEN PVC to communicate through the ATM network. The BXMs route the ATM cells from one MGX shelf to another. The PXM is the central processor for the SES PNNI Controller and provides ATM Switched Virtual Circuit (SVC) routing and signaling for WAN switching networks. The PXM runs the system software and maintains the trunk to the BPX. In the SES controller SNMP implementation, a master agent resides on the PXM card. A subagent also resides on the PXM to support the PNNI application.
The MGX 8820 is a modular, software-based system architecture that enables the switch to support new features through downloadable software upgrades or new hardware modules. The MGX 8820 platform converts incoming data to ATM cells for transport over the ATM network. A redundant bus architecture provides high capacity, and application performance.
The MGX 8850 switch utilizes a universal edge architecture and supports a wide range of services over narrowband and mid-band user interfaces. The MGX 8850 maps all traffic to and from the ATM. The MGX 8850 converts all user information into ATM cells for transport over the ATM network. The interfaces available for user traffic include the following:
FR to ATM network interworking and service interworking IP-based virtual private network Video Circuit emulation services for private line replacement
Network Management
This section contains the following:
The CWM (version 15.1.50) provides the Graphical User Interface (GUI) that allows the user to select a management tool to operate and maintain the MPS. The CWM provides network management, element management, and fault management. The CWM provides an interface to monitor and configure the BPX/MGX, and a connection generation service that allows a user to add connections to the MPS NEs. The CWM also utilizes BLAST 4.0, a bulk provisioning tool that sets up all parameters for the user.
MPS Redundancy
The following MPS components use redundancy to manage resources in the iDEN network.
BPX - For protection against hardware failure, the BPX switch shelf is equipped with the following redundant components: common control modules, crosspoint switch matrices, high-speed network interface cards, power supplies, and high-speed data and control lines.
MGX 8820 - The MGX 8820 cell bus contains two pairs of unidirectional buses for redundancy. The backplane also contains a bus for support of N+1 redundancy. The MGX 8820 supports one or two Power Entry Modules (PEMs), and these modules provide power supply redundancy for a fully loaded MGX 8820 shelf. The SRM card controls redundancy in any of the service modules. There is one redundant card per MGX shelf that connects to EBTS sites; this one card maintains N+1 redundancy for all the T1/E1 line cards in the shelf.
MGX 8850 - The MGX 8850 backplane contains a main system bus, and a cell bus for communication between card slots. The cell bus contains two pairs of unidirectional buses for redundancy. The backplane also contains a bus for support of N+1 service module redundancy. The MGX 8850 supports one or two PEMs, which are connected to their own independent 48V DC supply. These modules provide redundancy, and are capable of supplying enough power for a fully loaded MGX 8850 shelf. The MGX 8850 also has APS and SRM redundancy.
SES - The SES node architecture includes the BPX switch and the SES PNNI Controller. The SES node architecture is completely redundant and offers a hitless active/standby switchover.
CLI
The CLI is used for low-level configuration, and access to the BPX, MGX and SES.
Chapter 9
Packet Data Subsystem Details
In This Chapter
Topic                                                          See Page
Purpose .......................................................... 9-3
Packet Data Subsystem Description ................................ 9-4
Introduction ................................................. 9-4
Packet Data Overview ......................................... 9-4
Next Generation Dispatch Configurations .......................... 9-5
All Frame Relay (AFR) ........................................ 9-5
MMI Commands ..................................................... 9-13
RAG Downloads .................................................... 9-13
Event Logs ....................................................... 9-13
Purpose
This chapter describes the major network elements of the Packet Data subsystem in terms of their key hardware components, software function, and network management facilities. Each key component is explained, including its key functions, hardware and software descriptions, and the network management.
Subscriber Unit (SU) - sends and receives data directly.
Enhanced Base Transceiver System (EBTS) - identifies the call as packet data and converts the radio link to data packets.
Metro Packet Switch (MPS) - routes data packets.
Dispatch Application Processor (DAP) - determines services and location.
Mobile Data Gateway (MDG) - routes the data to and from the Internet.
Billing Accumulator (BA) - collects time and bit-transfer information for billing as desired by the service provider.
The equipment used to control, monitor, and administer the system is primarily internal to the iDEN system. The packet data process follows the same procedures as dispatch call processing. The exception to this procedure is the billing information collection and external routing of the MDG or any servers that may be part of the service provider hosting network. Packet Data provides additional data services using the Dispatch subsystem. Packet data provides the user access to Intranet and Internet applications using standard Internet protocols. The Packet Data subsystem uses portions of the Dispatch subsystem along with its own network elements to provide packet data services. The Packet Data subsystem consists of the following equipment:
MDG - interfaces to the Internet, converting iDEN protocol to the Internet Protocol and vice versa. The MDG also functions as the Foreign Agent (FA).
BA - collects packet data billing information.
HA (Home Agent) - routes packets from the Internet to the correct MDG for delivery to the subscriber unit.
Central LAN - provides for the routing of private network connections within the Dispatch and Packet Data subsystems, and also utilizes public network connections to other systems and networks.
The Next Generation Dispatch provides higher capacity and greater system reliability in a significantly smaller footprint. It is characterized by All Frame Relay (AFR). See Figure 9-1 below.
NGD System
All Frame Relay (AFR) - A site configuration characterized by all traffic being carried on a single Frame Relay channel. This channel must terminate on an iVPUdi.
References
The following is a list of technical documentation and training courses that support the MDG2:
Packet Data Fundamentals Course (FND107)
Packet Data Operations & Maintenance Course (OPM208)
iDEN System Recovery Topics Course (ADV401)
MDG System Manual SR12.0 (68P80800B95)
iDEN Packet Data Service Registration and Call Processes (68P80801C90)
Overview of iDEN Packet Data RF Operation and Performance (WP2000-002)
Understanding iDEN System Fault Tolerance Packet Data Subsystem (WP2002-023)
MDG Command Reference (68P80802D30)
Remote Access to 64K MDG MMI (WP2003-020)
Packet Data Subsystem Troubleshooting Guide (6881012Y82)
Packet Data Troubleshooting Equipment List for 64K MDG (WP2003-019)
SR13.4 Packet Data Health Check for SR 13.4 (HC06005)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
MDG2 Overview
The primary function of the MDG is to manage the overall process of Mobile IP. The MDG is an enterprise-level switching router that directs connections to the Internet. This device is rack mounted and has a number of ports available that vary according to the provisioning and growth planning of the individual iDEN system. Key points about the MDG include that it:
Is the essence of Packet Data functionality within the iDEN system infrastructure.
Is the interface to the Internet and the World Wide Web for the iDEN system during packet data operation.
Is a highly reliable and fault-recoverable system entity.
Has all software and configurable parameters downloaded from the OMC (Operations and Maintenance Center).
Mobility Management
Assists the DAP in SU registration for packet data, establishing parameters, working records, and care-of-address information
Acts as the Mobile IP foreign agent, handling IP registration and decapsulation
Performs packet data reconnect for SUs after each cell change
Accounting - Maintains a count of inbound/outbound datagrams and transfers this data to the BA for customer billing
Encryption
Datagram Delivery - Delivers inbound datagrams after de-capsulation by the Foreign Agent function
Session State Control - Executes session state transitions based upon:
Response or lack of response to paging
Expiration of the session timer
Notification from the DAP
Notifies the DAP whenever an SU session experiences a state change
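The session state transitions listed above can be sketched as a small state table. This is a hypothetical illustration only: the state names, event names, and specific transitions are assumptions for the example, not the MDG's actual state machine.

```python
# Hypothetical sketch of MDG session state control. The (state, event)
# pairs and target states are illustrative assumptions.
TRANSITIONS = {
    ("active", "page_no_response"): "dormant",
    ("active", "session_timer_expired"): "inactive",
    ("active", "dap_notification"): "inactive",
    ("dormant", "page_response"): "active",
}

def next_state(state: str, event: str) -> str:
    """Apply an event; unknown (state, event) pairs leave the state
    unchanged. In the MDG, every state change is reported to the DAP."""
    return TRANSITIONS.get((state, event), state)

print(next_state("active", "session_timer_expired"))  # inactive
```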
The MDG works as a router and a protocol converter but it has no hard drive or means of mass storage. As it collects billing data on the subscribers on its Resource Allocation Group (RAG), it must pass that billing data to a BA for storage and later processing. See Billing Accumulator on page 9-24 for more details.
The MDG dual 9-slot chassis is equipped with two sets of key switches and indicator lights to control power to the respective card cages.
A three-position key turns the power supply outputs on or off. The RUN position is on, the STANDBY position is off. The key can be removed in either position to lock the switch in that setting. If the key is in the RUN position when the chassis is plugged in or power is restored, the power supply cycles on immediately. The key employs a standby power arrangement. With the key in the STANDBY position, power to the card cage is removed, but the primary side of the power supply remains energized. The RESET position supplies a reset signal to the backplane to reset the system in case of a malfunction. RESET is a momentary-contact setting that is available only when the key is inserted in the switch.
MDG2 Boards
There are five separate VME CPU cards within the MDG as described below. See Figure 9-2 for the board placement in the chassis. Figure 9-3 Actual MDG2- Front View on page 9-10 provides an actual photograph of the MDG2 without the front cover. Figure 9-2 MDG2 - Front View
MDGp Board - The MDGp board is the main processor of the MDG. It is powered by a single 350 MHz PowerPC 750 processor with 256 MB of RAM. This board provides an ethernet IEEE 802.3 (10BaseT) interface via the port connection panel. This interface is used to connect the MDG to the IP network; additionally, it is used to send subscriber accounting summaries to a billing accumulator.
MDG Encryption Compression Processor Boards - The MDG Encryption Compression Processor (ECP) boards provide encryption and compression functionality. There are three MDG ECP boards, powered by the same hardware as the MDGp board: each ECP is a single-processor CPU board with a 350 MHz PowerPC 750 processor and 256 MB of RAM. The ECP boards on a backup MDG (N+1 MDG) are used to store mirror images of the Packet Data VLR information of all active MDG nodes within the same MDG cluster.
MDGio Board - The MDGio board provides serial connectivity to the Frame Relay network. It supports from one to four 2 Mbps V.35/HDLC ports running the Frame Relay packet switching protocol. The MDGio board is a single-processor CPU board with a 350 MHz PowerPC 750 processor and 256 MB of RAM that also supports a PCI Mezzanine Card (PMC) with two 68360 QUICC processors. The I/O connectivity is provided via the transition module. The MDGio board also provides an ethernet IEEE 802.3 (10BaseT) interface, which is used to connect to the iDEN IP network.
PCI Mezzanine Card - A double-wide PCI Mezzanine Card (PMC), powered by two 68360 QUICC communication processors, attaches to the baseboard.
Figure 9-3
Power Supply Modules - DC power for the backplane, fans, and drives is furnished by a UL/CSA/VDE-approved 700 W autoranging power supply module. Both -48 Vdc and 115/230 Vac compatible versions are available, each providing +5 Vdc, +12 Vdc, and -12 Vdc to the system (with power factor correction in the AC version). The power supply modules are installed below the VME module card cage. Each has its own set of fans to supply forced-air cooling. The power supply modules are not adjustable when they are installed in the system.
Several components of the MDG2 are replaceable in the field. Refer to the MDG System Manual SR12.0 (68P80800B95) for details on the specific procedures and related information. The following MDG2 components are replaceable in the field:
XR DC Power Supply
XR System Air Filter
SMM762 4-port V.35 Transition Module
MVME2434 750PPC 350 MHz with 256 MB of memory, with PCI Mezzanine Card (PMC)
MVME2434 750PPC 350 MHz with 256 MB of memory
XR fan module
OMC-R
MMI Commands
Alarm Indicators and Controls
RAG Downloads
Event Logs
Automatic Fault Recovery Process
OMC-R
The MDG receives its software download from the OMC-R. The OMC-R serves as a file server for storing the software load required by each MDG node to function. The software load is downloaded to an MDG node using the Proprietary File Transfer Protocol (PFTP) for both the MDG node bootstrap download and the online download. In SR 14.0 and later, the MDG2 communicates with the OMC-R only over IP. During MDG boot, the MDG executes ROM code and establishes a connection to the OMC-R. The MDG then begins the PFTP procedures. The MDG may download files based on several conditions:
The MDG does not have the file or contains a corrupted file
The MDG detects a version mismatch
The MDG detects that the OMC-R is forcing a file download
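The three download conditions above can be combined into a single predicate. This is a hedged sketch of the decision only; the parameter names are illustrative, and the real MDG evaluates these conditions inside the PFTP procedure:

```python
# Sketch of the MDG's file-download decision. Field names are
# illustrative assumptions, not actual PFTP protocol fields.
def needs_download(has_file: bool, file_ok: bool,
                   local_version: str, omcr_version: str,
                   force_flag: bool) -> bool:
    """True when the MDG should pull the file from the OMC-R."""
    if not has_file or not file_ok:       # file missing or corrupted
        return True
    if local_version != omcr_version:     # version mismatch
        return True
    return force_flag                     # OMC-R forcing a download

print(needs_download(True, True, "SR16.0", "SR16.0", False))  # False
```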
The MDG 21-slot chassis is equipped with two sets of key switches and indicator lights to control power to the respective card cages (System A and System B).
Input Power Switch - A rocker switch turns the power supply outputs on or off. Turning the switch On (I) supplies power to the platform's power supplies. Turning the switch Off (O) removes input power from the platform's power supplies.
Indicators - Several indicators display the system status, slot status, and the status of the power supply and fan. They are located on the front panel of the card cage and power supply.
The alarm display panel is located on the front of the chassis and provides LEDs and alarms for each slot of the chassis. The top row displays red LEDs for Out of Service activity, and the second row displays green LEDs for In Service. There are also three system alarm LEDs to indicate system status. Each of the three power/fan modules is equipped with four status LEDs on the front panel. In the event of a Fan/Filter Out of Service alarm, first check the fan filter to make sure the airflow is not obstructed.
SR 16.0 05-Oct-07
MMI Commands
The MDG does not require a user name and password to be entered before using the MMI. Three levels of commands can be executed.
To access the MDG RAM software, log in to any of the Man Machine Interface (MMI) ports on the MDGio, MDGp, or any of the MDGecp modules. Basic commands are available on all the MMI ports. There are, however, commands specific to each module in the MDG. These commands can be accessed using the help function for each individual board.
RAG Downloads
A Resource Allocation Group (RAG) download is the process by which a RAG assignment is moved from an Active MDG to a Standby MDG. This process occurs in many different situations, but it is vital to the continued service of Packet Data in a market. Without a RAG assignment, Packet Data will not function in the iDEN system. This process is often referred to as a "RAG Swap".
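The RAG Swap described above can be sketched as a simple reassignment of a RAG-to-MDG mapping. The data model below is an assumption for illustration, not Motorola's actual procedure.

```python
# Illustrative sketch of a "RAG Swap": the RAG assignment is moved from the
# Active MDG to the Standby MDG so that Packet Data service continues.
def rag_swap(assignments: dict, rag: str, standby_mdg: str) -> dict:
    """Return a new RAG-to-MDG map with `rag` reassigned to the standby."""
    updated = dict(assignments)
    updated[rag] = standby_mdg      # the standby MDG takes over this RAG
    return updated

# Example: MDG-A fails, so the standby MDG-C takes over RAG-1.
rags = rag_swap({"RAG-1": "MDG-A", "RAG-2": "MDG-B"}, "RAG-1", "MDG-C")
```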
Event Logs
The OMC-R console provides a GUI for navigating the event logs on the OMC-R. The process for viewing the OMC-R event logs from the command prompt of the OMC-R MMI box is described in the OMC-R System Administrator Manual (6881001y44). The event logs reside on the MMI processor. These logs are updated to the SYS processor but should always be viewed from the MMI processor.
References
The following is a list of technical documentation and training courses that support the MDG4:
Packet Data Fundamentals Course (FND107)
Packet Data Operations & Maintenance Course (OPM208)
iDEN System Recovery Topics Course (ADV401)
SR13.4 MDG4 Hardware Installation Manual (68P81001Y76)
Packet Data Troubleshooting Equipment List for 64K MDG (WP2003-019)
SR13.4 Packet Data Health Check for SR 13.4 (HC06005)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
MDG4 Overview
The MDG is a network element that provides Mobile subscribers packet data functionality within the iDEN system infrastructure. The MDG4 supports up to 600K subscribers per MDG4 node. The scalability of the MDG4 node provides the ability to address four subscriber targets: 256K, 384K, 512K, and 600K subscribers. See Figure 9-4 MDG4 in the iDEN Network on page 9-15.

An MDG4 cluster is composed of multiple (two or more) MDG4 nodes. There are N+1 MDGs in an MDG4 cluster; the actual value of N may be affected by the number of RAGs serviced by the MDG4 cluster. Two levels of redundancy are used in the MDG4 architecture, one at the MDG4 node level and one at the MDG4 cluster level. The iDEN packet data architecture supports N+1 redundancy for MDG4 nodes. One of the MDG4 nodes has a backup role. It becomes active in the event of the failure of an active MDG4 node and assumes responsibility for the set of users previously served by the failed MDG4.

The Packet Data VLR Mirroring feature allows the backup MDG4 node to automatically reconstruct the VLR information of the mobile users at the time of recovery once the packet data VLR mirror is established. The MDG4 acts as a Mobile-IP Foreign Agent (FA). Mobile-IP is a standard IP protocol which achieves mobility through the cooperation of three entities: the Mobile Node (MN), the Home Agent (HA), and the FA.
The MDG works as a router and a protocol converter but it has no hard drive or means of mass storage. As it collects billing data on the subscribers on its Resource Allocation Group (RAG), it must pass that billing data to a BA for storage and later processing. See Billing Accumulator on page 9-24 for more details. Figure 9-4 MDG4 in the iDEN Network
Alarm Management Controller (AMC) cards
IP Switch Board (IPSB) cards
The three types of logical blades in the MDG4 architecture include:
VLR Manager card (MDGv)
Data Processing card (MDGd)
Communication card (MDGc)
The remaining slots of the chassis are populated with the payload cards listed above. Four different platform configurations are supported: MDG4 nodes supporting up to 256K, 384K, 512K, and 600K users.
Alarm Management Controller Card
Two Alarm Management Controller (AMC) cards operate as a 2N mated pair, running Active/Standby. The AMC controls the display of the chassis LEDs and relays. The AMC relays system status information to system management software by way of an Ethernet link. The system management software evaluates the status inputs and responds as appropriate.
IP Switch Board Cards
Two IP Switch Board (IPSB) cards operate as a 2N mated pair, running Active/Standby. The IPSB card acts as a layer 2 switch and/or layer 3 router, and provides the MDG4 node with Ethernet and IP switching. It also provides routing functionality within the chassis, between the chassis and the packet data IP network, and between the chassis and the iDEN IP network.
MDGv Card
The MDGv card is used for storing and managing packet data VLR information. Other features of the MDGv card include:
The MDG chassis/application manager and the Master Agent of the node also reside on the MDGv card. The two MDGv cards in the MDG node operate as a 2N mated pair, running Active/Standby. The packet data VLR information, the MDG Cluster Management information, and the chassis management configuration data are checkpointed between the two MDGv cards. The active MDGv card has an interface to the iDEN IP network, which is used for communication between the MDG nodes in the same packet data region, between the MDG node and the Billing Accumulators, and between the MDG node and the OMC-R.
MDGd Card
The MDGd card is used for packet data processing, including packet data encryption and compression functionality. Additional features of the MDGd card are as follows:
There are N+1 MDGd cards, operating in N active and 1 standby mode. There is no checkpointed data between these MDGd cards. All the packet data session-related information previously managed by the failed MDGd card will be lost (all active packet data sessions will become dormant) as a result of MDGd card switchover. Other information generated by the failed card (such as statistic information and billing information) might also be lost as a result of MDGd switchover. Each active MDGd card has an interface to the PD IP network.
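The switchover behavior described above can be illustrated with a small sketch. The class and method names are invented for demonstration; the point is that, with no checkpointed session data between MDGd cards, a switchover loses the failed card's session state and those sessions become dormant.

```python
# Hedged sketch of MDGd N+1 switchover (data model is an assumption).
class MdgdPool:
    def __init__(self, active_cards, standby_card):
        self.sessions = {card: [] for card in active_cards}  # card -> sessions
        self.standby = standby_card

    def start_session(self, card, session_id):
        self.sessions[card].append(session_id)

    def switchover(self, failed_card):
        # The standby card replaces the failed card with empty state;
        # the failed card's sessions are returned as "now dormant".
        dormant = self.sessions.pop(failed_card)
        self.sessions[self.standby] = []
        self.standby = None          # the single spare is now consumed
        return dormant
```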
MDGc Card
The MDGc card provides iDEN I/O connectivity with all the cell sites (up to 3200 sites) and the DAPs. A modular I/O interface (plug-in module) is built into the MDG4 architecture to support HSSI, High Speed V.35, and DS3 (T3) interfaces. Each MDG card is located in the appropriate range of chassis slots according to card type:
Slots 1-2 for V cards
Slots 3-8 for D cards
Slots 9-18 for C cards
Figure 9-5
Figure 9-6
System management software receives system status information from the Alarm Management Controller (AMC) board by way of an Ethernet link. The system management software evaluates the status inputs and responds as appropriate. The system management software has the task of making all decisions pertaining to the state of the system and transmitting them back to the AMC board as long as the AMC-CM (Configuration Management) link exists.
OMC-R
MMI Commands
Event Logs
RAG Downloads
VLR Mirroring
MDG Cluster Management/Fault Recovery
Load Shedding
OMC-R
In SR 14.0 and later, the MDG/OMC-R communication switches from FR to Internet Protocol (IP).
OMC-R will act as the TFTP server
Upgraded MDGs request new IP bootstrap code and IP load from the OMC-R
The latest enhancements to the OMC-R include the Network Views graphical display and the LDM Download Manager:
Network Views - offers a graphical display, with each network element identified by type, name, and status.
Colors indicate element status, including green for OK, yellow for impaired status, and so on. Network Views supports filtering icons based on element status.
Elements selected for download are moved to a download queue and the download process is started immediately. Download rules are built into the package to minimize network disruption. Extensive warnings alert the operator about network impacts.
MMI Commands
The MDG does not require a user name and password to be entered before using the MMI. Three levels of commands can be executed.
Basic commands are available on all the MMI ports. There are, however, commands specific to each module in the MDG. These commands can be accessed using the help function for each individual board.
User Interface Menu
These logins allow the user to easily check the status of the MDG4 and perform specific functions via menu selection. For the user password, refer to the release notes for the MDG4 version being used. See Figure 9-7 below for an example of the MDG4 menu.
Figure 9-7
Event Logs
The OMC-R console provides a GUI for navigating the event logs on the OMC-R. The process for viewing the OMC-R event logs from the command prompt of the OMC-R MMI box is described in the OMC-R System Administrator Manual (6881001y44). The event logs reside on the MMI processor. These logs are updated to the SYS processor but should always be viewed from the MMI processor.
RAG Downloads
The Resource Allocation Group (RAG) download is also used by the MDG4. This is the process in which a RAG assignment is moved from an Active MDG to a Standby MDG. This process occurs in many different situations, but it is vital to the continued service of Packet Data in a market. Without a RAG assignment, Packet Data will not function in the iDEN system. This process is often referred to as a "RAG Swap".
VLR Mirroring
The Packet Data VLR Mirroring feature allows the backup MDG4 node to automatically reconstruct the VLR information of the mobile users at the time of recovery once the packet data VLR mirror is established.
MDG Cluster Management is a new concept in the MDG4 architecture that replaces the Automatic Fault Recovery Process (AFRP) of the MDG2 architecture. It is managed by a mechanism resident in the pool of MDG4 nodes within the MDG4 cluster itself, without OMC-R involvement. MDG4 Cluster Management consists of two procedures: the Election Process (to elect an MDG node as the Cluster Coordinator) and the Resource Distribution Process (to distribute RAGs among the MDG nodes in the cluster). The N+1 redundancy model within an MDG cluster is maintained to handle failures that are not recoverable within an MDG node, or to handle an operator-initiated switchover. The following is a list of triggering events for an MDG node switchover:
Lost communication to PD IP network
Lost communication to all DAPs connected to the MDG node
Loss of more than 25% of MDGc cards functionality
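The Election Process can be sketched as follows. The actual MDG4 election algorithm is not documented here; the rule shown (lowest-named healthy node wins, so every member independently reaches the same answer) is an assumption for illustration only.

```python
# Illustrative Cluster Coordinator election sketch (algorithm assumed).
def elect_coordinator(nodes: dict) -> str:
    """nodes maps MDG node name -> True if the node is healthy/reachable."""
    healthy = sorted(name for name, ok in nodes.items() if ok)
    if not healthy:
        raise RuntimeError("no healthy MDG node available for election")
    return healthy[0]   # all members agree on the same Cluster Coordinator
```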
Load Shedding
The load shedding mechanism involves memory usage, CPU usage, and possibly queue sizes as a trigger. Alarms will be used to indicate when the trigger points are reached. The load shedding algorithm uses three major priority categories, including:
Maintenance messages (including messages related to RAG management and fault management)
Fault management
Location tracking of Subscriber Unit (SU)
Packet Data traffic is the lowest priority, with SUs not in session at a lower priority than SUs in session.
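The priority ordering above can be sketched as a shedding rule. The numeric priority values are invented for illustration; the manual only gives the relative ordering, and the trigger here is simply a boolean standing in for the memory/CPU/queue alarms.

```python
# Hedged sketch of priority-based load shedding (priorities assumed).
PRIORITY = {
    "maintenance": 0,           # RAG/fault management messages (highest)
    "fault_management": 1,
    "location_tracking": 2,     # SU location tracking
    "pd_in_session": 3,         # packet data, SU in session
    "pd_not_in_session": 4,     # packet data, SU not in session (lowest)
}

def shed_load(queue, overloaded):
    """While overloaded, drop only the lowest-priority category of traffic."""
    if not overloaded:
        return list(queue)
    lowest = max(PRIORITY.values())
    return [msg for msg in queue if PRIORITY[msg] < lowest]
```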
Billing Accumulator
References
The following is a list of technical training courses that support the BA:
Packet Data Fundamentals Course (FND107)
Packet Data Operations & Maintenance Course (OPM208)
BA Overview
The main functions of the BA are to:
Collect and store billing records that are produced by the MDG.
Provide an avenue for the stored billing records to be transferred to an outside billing center.
Similar to call data records from the MSC, this information is used by the service provider to gather usage (time and/or data transferred) and similar account information. There are two Billing Accumulators in the Packet Data subsystem (See Figure 9-8 Billing Accumulator Mirror Function on page 9-25):
Each consists of a CPU board, Ethernet board, and System Monitor board
One BA is the Primary and the other is Secondary
Each MDG is connected to both Billing Accumulators
Data is sent to the Primary BA, which mirrors the billing data to the Secondary BA
If the Primary BA becomes unavailable, the Secondary BA takes over
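The mirroring behavior above can be sketched in a few lines. The data structures are assumptions for illustration: billing records go to the Primary BA, which mirrors them to the Secondary; if the Primary becomes unavailable, the Secondary takes over.

```python
# Hedged sketch of the Primary/Secondary BA mirror (model assumed).
class BaPair:
    def __init__(self):
        self.primary, self.secondary = [], []
        self.primary_available = True

    def store(self, record):
        if self.primary_available:
            self.primary.append(record)
            self.secondary.append(record)   # mirror to the Secondary BA
        else:
            self.secondary.append(record)   # Secondary BA has taken over
```

Either way, no billing record is lost as long as one BA remains available.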
Figure 9-8
The BA is a Unix-based computer designed to collect billing data from the MDGs. To allow connection to the MDGs, the BAs are connected through the Local Area Network.
BA Hardware Description
BAs work in pairs and maintain secure connections to the MDGs constantly. If a TCP/IP connection to an MDG is lost, it is often the BA that reports the failure to the OMC. Each BA has one Ethernet port and can be accessed via telnet if needed. The BA consists of the following boards (as shown in Figure 9-9 Billing Accumulator - Front View on page 9-26):
Power Converter Module
CPU Board
Ethernet Board
System Monitor Board
SCSI Hard Drive
Figure 9-9
Power Converter Module
The Power Converter is a modular unit that slides into the BA. The front of the power converter contains voltage level indicators, voltage test points, and the power converter switch.
CPU Board
The CPU Board is the main processing board in the BA. It also has a Small Computer Systems Interface (SCSI) bus interface.
Ethernet Board
The Ethernet board provides a communication path between the VME bus and the devices (IRs, Hubs, and MDGs) on the LAN.
System Monitor Board
The System Monitor Board (SMB) monitors the BA for errors and issues visual and audible alarms in case of problems.
SCSI Hard Drive
The disk is a Modular Disk Subsystem unit that is used for storing information. It has a removable aluminum case that houses the hard disk drive and transition board. The BA back panel consists of the following boards (as shown in Figure 9-10 below):
I/O Communications Transition Board
Ethernet Port Connector Board
Main Serial Port Transition Board
Input Power Transition Board
I/O Communications Transition Board
The I/O Communications Transition board provides an internal connection for 2 telephone lines and 5 serial communication ports. These serial ports can be used to connect external modems or communications devices using standard DB-25 or 26 pin connectors.
Ethernet Port Connector Board
The Ethernet Port Connector board provides the Ethernet communication connection.
Main Serial Port Transition Board
The Main Serial Port Transition Board provides an interface between the CPU and peripheral devices. This board also provides proper termination by attaching a SCSI Terminator Cap to the right of the printer connection.
Input Power Transition Board
The Input Power Transition board provides an external connection from a 48 volt DC power source and also performs the following functions:
Filters input power
Provides an in-line fuse
Provides an internal board connection for supplying power to a BA resident converter unit
Generates airflow through the BA
BA Software Description
The BA uses the AIX UNIX operating system. The BA application processes run on top of AIX. The AIX operating system and the BA application boot automatically when the DCMS is powered up, allowing you to use the system. The BA receives usage detail records from the MDG nodes using the Billing Message Protocol (BMP) and the Billing Session Protocol (BSP), proprietary protocols that run on top of TCP/IP. The data is stored on a hard drive until it is processed.
Data Files
The BA writes data into a raw data file every hour. This raw data file is exchanged with the other BA and then forms the input for the processing of data by the Processing Task (PRC). Once the data files are processed, they can be polled from the server. These files have a Pr prefix in the name.
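The hourly data-file flow can be sketched as follows. The exact file-name layout is an assumption for illustration; the manual only states that one raw file is written per hour and that processed files carry a Pr prefix.

```python
# Illustrative sketch of BA data-file naming (layout assumed, not documented).
from datetime import datetime

def raw_file_name(ts: datetime) -> str:
    return ts.strftime("billing_%Y%m%d_%H00.dat")   # one raw file per hour

def processed_file_name(raw_name: str) -> str:
    return "Pr" + raw_name    # PRC output: same name with the Pr prefix

name = raw_file_name(datetime(2007, 10, 5, 14))
```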
BA Network Management
The BA operations and maintenance functions are performed using the following:
OMC-R
Alarms
OMC-R
The MDG will generate alarms at the OMC-R when various problems are indicated on the BA.
Alarms
BA errors can range in severity from minor faults to critical situations that can cause degradation and put billing information at risk. The System Monitor Board helps identify various problems by generating an alarm. Activated FAILED LEDs on the individual boards indicate problems such as power interruptions or faulty boards.
Chapter 10
Network Management Subsystem Details
Purpose
This chapter describes the Network Elements (NEs) that comprise the Network Management subsystem. Each NE contains information on the key hardware components, software function, and network management facilities. The following NEs are supported in the legacy Network Management subsystem:
Legacy OMC-R
Enhanced Global Title Translation Tool (EGT4)
Cisco WAN Manager
The following NEs are supported in the next generation Network Management subsystem:
References
The following publications may be required to supplement the information contained in this chapter.
OMC-R System Overview (68P81001Y90)
OMC-R Configuration Management Parameters (68P81001Y92)
OMC-R Configuration Management Users Guide (68P81001Y91)
OMC-R Event/Alarm Management (68P81001Y94)
OMC-R Online Change Configuration Users Guide (68P81001Y97)
OMC-R Performance Management (68P81001Y95)
OMC-R System Administrator Guide (68P81001Y93)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
OMC-R Overview
The OMC-R is the control center that provides system configuration, performance monitoring, and status monitoring for the NEs that comprise the iDEN network. The OMC-R, however, does not provide functional operations for Dispatch, Interconnect, or Packet Data services. Each OMC-R provides the following functionality in the iDEN network:
Coordinates uploads/downloads to the NEs that interface with the OMC-R.
Provides event logging and display and site status display for the NEs.
Coordinates system configuration and validation, system configuration reports, IOS operation, and MDG fault tolerance for the entire Urban.
Administers user accounts at both the primary and secondary OMC-R.
In a typical configuration, the OMC-R manages up to 500 sites and is located in a Mobile Switching Office (MSO). For an installation with more than 500 sites, the sites must be distributed among multiple OMC-Rs to efficiently manage the traffic and network requirements.
Hardware Description
The legacy OMC-R is comprised of the following main components:
Operations and Maintenance Links (OML)
OMC-R system processor
Man-Machine Interface (MMI) processor
X-terminal clients
OML
Each NE is connected to the OMC-R by X.25, ethernet, or frame relay. These connections are called OMLs. The OMLs are duplex connections that carry configuration information to each NE, as well as information regarding the status of that NE. The following links are utilized with the OMC-R:
LAN
Each OMC-R consists of a system processor, an MMI processor, X-terminals, and a timing server. These modules are linked by an ethernet LAN.
Metro Packet Switch (MPS) links
Each NE connects to the MPS using T1 or E1 facilities. Two timeslots in one of the T1 span lines are used for OMC-R network management communications. Requests for data, configuration commands, and other OMC-R data are transferred through these two timeslots. The Network Access Server (NAS) converts the V.35 physical protocol used in the OMC-R links to the T1 or E1 physical protocol used in the BSC network elements.
Simple Network Management Protocol (SNMP) links
Status and event reporting across the OML utilizes SNMP.
Ethernet maintenance links
NEs report status information across the ethernet network using Internet Protocol (IP).
The OMC-R system processor is the communication link, resource manager and server for the OMC-R. The following platforms are available for the OMC-R system processor:
Sun Fire 4800 - The Sun Fire 4800 contains two 1.2 Gigahertz (GHz) CPUs, 4 Gigabytes (GB) of Random Access Memory (RAM), 73 GB mirrored hard drives, and other miscellaneous equipment.
See Figure 10-1, and Figure 10-2 Sun Fire 4800- Rear View on page 10-7 for additional information.
Enterprise 3500 - The Enterprise 3500 contains one to three 400 Megahertz (MHz) Central Processing Units (CPUs), increased storage capacity for medium to large installations, and other miscellaneous equipment (see Figure 10-3 and Figure 10-4 Enterprise 3500 - Rear View on page 10-9).
The Enterprise 3500 supports the following:
Up to 800 X.25 sites
6 Dispatch Application Processor (DAP) clusters
8 active Mobile Data Gateways (MDGs) and 1 standby MDG
Figure 10-3 Enterprise 3500 - Front View
Key:
HD=HARD DISK
Key:
SB=SYSTEM BOARD HME=HUNDRED MEGABIT ETHERNET
MMI Processor
The MMI processor provides the functionality for user management, security, resource allocation, other operations and maintenance type procedures. The MMI processor may exist as a single primary installation in a smaller Urban, or as a secondary MMI processor in a larger network. For more information on the MMI processor, see the OMC-R System Administrator Guide (68P81001Y93). The following platforms are available for the MMI processor:
Netra 20 - The Netra 20 contains a 360 MHz CPU module with a 4 Megabyte cache, 256 or 512 Mbytes of RAM, either two 9.1 GB internal mirrored hard drives or two 36.1 GB internal mirrored hard drives, DVD and tape drives, and other miscellaneous equipment.
Ultra 60 - The Ultra 60 contains a 1.2 GHz CPU module with a 4 Megabyte cache, 1 GB of RAM, 73 GB internal mirrored hard drives, CD-ROM and tape drives, and other miscellaneous equipment.
X-terminal Clients
The X-terminals are connected through the LAN to the system processor to run various Graphical User Interface (GUI) applications such as the MMI interface. Up to five X-terminals may be supported by an OMC-R configuration. The X-terminals are equipped with either a 17 inch or 21 inch color display. The X-terminals also contain 32 MB of memory and 128 bytes of Non-Volatile Random Access Memory (NVRAM).
Software Description
This section contains information on the OMC-R software architecture. For more information, see the OMC-R System Overview (68P81001Y90).
NE Software Loads
The OMC-R is the repository for the NE operating software, and the software for each NE is downloaded as configuration changes or enhancements are needed. Each new configuration file or load is made up of code sets. A code set contains the basic operational software and/or process software for the NE.
The functionality of the system processor includes the storage and download of network status, and the utilization of performance and configuration information. The system processor interacts with NEs and processes event information and performance statistics. The system processor administers system configuration information, user permissions, remote logins, and OMC-R resources such as printers, modems, and X-terminals.
The MMI application is available on an X-terminal and is used to operate and maintain the applications and databases of the OMC-R system processor. The MMI application includes:
Reports and query management
Event and alarm status definition and display
Load and network configuration interface
Spreadsheet package for the manipulation and reporting of statistics
Network Management
This section contains the following network management information:
Event/alarm management
Performance management
Configuration and load management
Fault management
Security management
Redundancy
Event/Alarm Management
Events/alarms are generated at a specific NE and sent to the OMC-R over the X.25 network. The OMC-R collects and stores events and alarms from the NEs to monitor the iDEN network. Each event sent to the OMC-R is either an alarm condition or an informational status report. Alarm conditions are referred to as active events, or events that require immediate attention. Informational status reports are referred to as passive events that do not need immediate attention, but that provide information regarding the status of a particular NE. Events and alarms are also monitored for the links that connect the NEs together.
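The active/passive distinction above amounts to a simple classification. This sketch is illustrative only; the manual does not define a programmatic interface for event handling.

```python
# Hedged sketch of the active vs. passive event distinction (interface assumed).
def classify_event(is_alarm_condition: bool) -> str:
    # Alarm conditions are active events requiring immediate attention;
    # informational status reports are passive events.
    return "active" if is_alarm_condition else "passive"
```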
Performance Management
The OMC-R collects performance-related data from all of the NEs that are managed by the OMC-R for network planning and optimization. This data provides the information needed to fine-tune the performance of the NEs in the iDEN network. Statistics are collected by each particular NE and periodically uploaded to a database on the OMC-R.
The NE's application software is updated with each new configuration file or new software download. Each new configuration file changes and manages the NE's system software and hardware parameters. In addition, the technician can reconfigure existing network elements and introduce new system features.
Fault Management
With the fault management function, the technician can determine the location of a fault by running diagnostics, place NEs in or out of service, and display NE status.
Security Management
The OMC-R functions are protected by security management, and changes are only performed by authorized personnel. Each authorized technician is given a unique user name and password to gain access to the system. Authorized technicians are assigned security classes that specify what commands are accessed and what output messages are displayed.
Redundancy
The following redundant module is available with the OMC-R:
OMC-R system processor mirrored hard drives
References
The following publications may be required to supplement the information contained in this chapter:
OMC-R System Overview (68P81001Y90)
OMC-R Configuration Management Parameters (68P81001Y92)
OMC-R Configuration Management Users Guide (68P81001Y91)
OMC-R Event/Alarm Management (68P81001Y94)
OMC-R Online Change Configuration Users Guide (68P81001Y97)
OMC-R Performance Management (68P81001Y95)
OMC-R System Administrator Guide (68P81001Y93)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
NGO Overview
The Next Generation OMC-R (NGO) is available in the iDEN network and replaces the legacy OMC-R. Previously, up to seven OMC-Rs were utilized in an Urban. In the next generation architecture, up to three OMC-Rs are utilized in an Urban: one primary and up to two secondary. The Sun Netra 440 was chosen for the OMC-R platform because the architecture supports up to 10,000 sites. There was also a sizable gain in performance over the Enterprise 3500 and the Sun Fire 4800. The NGO merges the OMC-R system processor and the MMI processor into the same physical box on the Netra 440 platform. To complete the merge, the single box is divided into two Solaris zones: the global zone for the OMC-R system processor and a local zone for the MMI processor. The OMC-R provides the following functionality in the iDEN network:
Coordinates uploads/downloads to the NEs.
Logs and displays events and site status for the NEs.
Coordinates system configuration and validation, IOS operation, and MDG fault tolerance for the OMC-Rs in the Urban.
Administers user accounts at both the primary and secondary OMC-R.
A single OMC-R supports up to 1000 sites. In a two-OMC-R configuration, the primary OMC-R contains the MMI and the secondary OMC-R supports up to 2000 sites. In a three-OMC-R configuration, the primary OMC-R contains the MMI and each of the two secondary OMC-Rs supports up to 2000 sites, for a total of 4000 sites.
Hardware Description
The NGO consists of the following:
Sun Netra 440
Sun StorEdge 3510 Disk Array
Sun Netra 240 backup server
SunRay X-Terminal
LAN switch
External X.25 Hub
Network Time Server (NTS)
The Netra 440 platform contains the following components (see Figure 10-5 and Figure 10-6 Netra 440 - Rear View on page 10-15):
Solaris 10 Operating System (OS)
Four 1.593 GHz processors
32 GB of RAM
4-Port Frame Relay (FR) Card supports 4 cables
Four 146 GB mirrored internal SCSI disk drives
Two Fibre Channel Cards to connect to external disk array
Two 10/100/1000 Megabits per second (Mbps) Ethernet ports
(Figure callouts: DVD Drive, E0, E1, Serial Port, DC Ground Studs)
The Sun StorEdge 3510 external disk array is shared by all the OMC-Rs in the Urban and holds the Informix databases and ne_data partitions. Each OMC-R is connected to the disk array by two fibre channel connections. The StorEdge 3510 holds up to 12 physical disk drives and utilizes RAID5 mirroring. The disk array is logically partitioned into six Logical Unit Numbers (LUNs), and each OMC-R is allocated 2 LUNs.
Previously, backups for the OMC-R system processor, MMI processor, and Informix database were completed in separate components. Now the backup server on the Sun Netra 240 platform performs daily tape backups for the OMC-Rs in the Urban. A gigabit ethernet (1000 Mbps) link has been added to support the tape backup activities. The Sun Netra 240 platform is connected to a tape jukebox by a SCSI interface. The Sun C4 StorEdge tape jukebox holds 36 HP StorageWorks Ultrium 2 tapes: 30 backup tapes, 5 clone tapes, and a cleaning tape.
SunRay X-terminal
The SunRay X-terminal displays the status for the MMI processor and system processor. The X-terminal is typically either a T-HMX, an Explora 700, or an NC900. Up to five X-terminals are supported by the OMC-R. The SunRay X-terminal contains the following:
A 17 or 21 inch color display
32 MB of memory
A set of boot PROMs
128 bytes of NVRAM
IEEE 802.3 Ethernet connection (co-axial or twisted pair)
LAN Switch
The LAN switch is a Gigabit ethernet switch with 48 ports. This LAN switch supports the Gigabit speed requirement for the E0 LAN.
The external X.25 hub replaces the existing X.25 switch and NAS hub. The hub forwards X.25 traffic from the legacy BSCs to the OMC-R. One hub supports all X.25 traffic in an Urban.
NTS
The NTS has Global Positioning System (GPS) input and is used as the primary time source. The NTS server supports the Network Time Protocol (NTP) and the Simple Network Time Protocol (SNTP) over the IP network.
Software Description
This section contains the following software information:
Load Management (LDM) Download Manager
Download Queue Manager
AirGen
Informix
The LDM Download Manager controls download operations using a spreadsheet format. Elements selected for download are moved to a download queue and the download process immediately starts. Download rules are built into the package to minimize network disruption. Extensive warnings alert the technician about network impacts.
The Download Queue Manager works with the LDM Download Manager and is a process on the System Zone. The DQM controls and maintains the download queue order of NEs that are selected by the technician for download. The Download Queue Manager provides the following functionality:
Assigns a download order for each NE in the queue.
Allows the technician to change the download order of an NE whose download has not yet started.
Allows the technician time to apply a set of download/switch rules to minimize the downtime of NEs.
Uses an Acknowledgement (ACK) for both foreground and background downloads.
If a download fails, a total of 3 retry attempts are allowed.
Gives precedence to Internetworking Operating System (IOS)/Non-versioned Data (NVD) by blocking versioned downloads that have reserved download slots.
Gives the option to stop an OMC-R download if there is insufficient memory on the Integrated Site Controller (ISC2).
Introduces conditions for a successful iDEN Vocoder Processing Unit Dispatch (iVPUd) or iDEN Vocoder Processing Unit Dispatch/Interconnect (iVPUdi) download.
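The retry rule above (3 retries after a failed download) can be sketched as follows. The interface is invented for illustration; the DQM's actual implementation is not documented here.

```python
# Hedged sketch of the DQM retry rule (interface assumed).
def download_with_retries(attempt_download, max_retries=3):
    """attempt_download() returns True on success; retry on failure."""
    for _ in range(1 + max_retries):    # one initial try plus 3 retries
        if attempt_download():
            return True
    return False                        # all attempts exhausted
```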
AirGen
AirGen is an embedded GUI application running on the OMC-R platform that allows the technician to configure and manage the iDEN NEs. AirGen provides the following functionality:
OMC-R DTE addresses are associated with shelf/card at upgrade time.
iDEN System Objects screen: The OMC-R X.25 object moved under Urban.
Urban screen: X.25 Line Type indicates whether the PTI hub uses T1s or E1s.
OMC-R X.25 Select screen: Displays the X.25 shelf and card number.
OMC-R X.25 screen: Displays input fields for the X.25 shelf and card.
BSC screen: Displays input fields for the X.25 shelf, site address, BSC port number, and BSC time slot number.
Move BSC screen: Displays X.25 address, hub, and card fields.
Informix
A shared disk array contains the Informix databases for all of the OMC-Rs in the urban. A single backup process captures the OMC-R system processor file systems, the MMI processor file systems, and the Informix database. At the beginning of the backup, the Informix data is backed up by performing a binary dump to a file on the system processor. The OMC-R builds a backup image that includes the Informix data and the information is sent to the backup server over the E0 LAN.
Network Management
This section contains information on Network Views. The OMC-R has the same network management functionality as the legacy OMC-R. See Network Management on page 10-11 for a complete description of OMC-R related network management functions.
Network Views
Network Views is a graphical display that monitors the NEs in the iDEN network. The Network Views application displays network element status for multiple urbans or a single OMC-R. Each network element is identified by type, name, and status, and different colors indicate the status of each NE. The Network Views application also supports filtering icons based on element status. Network Views provides the following functionality:
Combined root view is the default.
Objects in the left panel are ordered and named differently.
Quick Filtered View has been added.
Text Only View has been added.
Timer Based Refresh is available.
Detailed Information display changes were incorporated.
References
The following publications may be required to supplement the information contained in this chapter:
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
EGT4 Overview
Enhanced Global Title Translation Table Tool (EGT4) provides seamless support for both 2G and 3G Urban areas simultaneously in the same Horizontal Network (HN). The EGT4 is configured by default to operate in a 3G mode supporting 2G and 3G Urbans. The EGT4 supplies complete interUrban configuration information to the primary OMC-R in each Urban Area. The local primary OMC-R configures the local NEs through the on-line change interface. A North-Bound Interface (NBI) between the OMC-R and the EGT4 allows the technician to add or modify the HN configuration information, push the data to the EGT4, and distribute the change to the other Urbans. The EGT4 provides the following functionality in the iDEN network:
Supports 2G and 3G Urbans in the same HN.
Manages Global 3G Universal Fleet Member ID (UFMI) ranges.
Utilizes the NBI to add or modify HN configuration information.
Distributes changed information to the Urbans in the iDEN network.
Monitors software loads between different Urbans.
The EGT4 also keeps configuration data synchronized between different Urbans through the urban distribution process. Non-versioned data from the EGT4 is coordinated with the local Urban OMC-R through the urban distribution manager process, which alerts the local OMC-R that new non-versioned data is ready for download to the local NEs. The urban distribution manager then coordinates the OMC-R processes to perform impact calculation, build, replication, and download of the information. The download success or failure is then reported back to the EGT4.
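The coordination sequence above can be sketched as follows. The function names are hypothetical stand-ins for the OMC-R processes; each stage must succeed before the next runs, and the outcome is reported back to the EGT4:

```python
# Hedged sketch of the urban distribution sequence for non-versioned EGT4
# data; distribute_nonversioned_data and run_step are hypothetical names.

def distribute_nonversioned_data(data, local_nes, report):
    """Walk the download pipeline, reporting success/failure back to EGT4."""
    steps = ("impact_calculation", "build", "replication", "download")
    completed = []
    for step in steps:
        if not run_step(step, data, local_nes):  # stages must succeed in order
            report(success=False, failed_step=step)
            return completed
        completed.append(step)
    report(success=True, failed_step=None)
    return completed

def run_step(step, data, local_nes):
    """Placeholder: a real implementation would invoke the OMC-R process."""
    return True
```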
Hardware Description
The EGT4 operates on a UNIX workstation with the following minimum requirements:
A dual 450 MHz processor UltraSPARC series computer from Sun Microsystems
512 MB RAM
36 GB hard disk
Monitor (1024x768 resolution)
Ethernet controller
CD-ROM drive
Software Description
This section includes the following EGT4-related software information:
The NBI object manager (nbiobjectmgr) is responsible for database-related activities and exchanges database tables with the EGT4. The NBI object manager retrieves GTT data from the EGT4 and monitors EGT4 table iterators to sequentially access NE information.
The NBI access manager (nbiaccessmgr) is responsible for EGT4 login/logout information. The NBI access manager performs EGT4 client authentication (MD5), allows the EGT4 to change the EGT4 account password, and keeps track of EGT4 client login sessions.
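A minimal sketch of MD5-based client authentication in the spirit of the nbiaccessmgr description is shown below. The challenge/response exchange, function names, and message formats are assumptions, not the documented NBI protocol:

```python
import hashlib
import secrets

# Hypothetical MD5 challenge/response sketch: the server issues a nonce, the
# client hashes it with the shared account password, and the server verifies.

def make_challenge():
    """Server side: issue a random nonce to the EGT4 client."""
    return secrets.token_hex(16)

def client_response(challenge, password):
    """Client side: hash the nonce together with the account password."""
    return hashlib.md5((challenge + password).encode()).hexdigest()

def verify(challenge, password, response):
    """Server side: recompute the digest and compare in constant time."""
    expected = hashlib.md5((challenge + password).encode()).hexdigest()
    return secrets.compare_digest(expected, response)
```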
The NBI notification manager (nbinotificationmgr) serves as an event channel for EGT4, NBI, and NEs that may send or receive events asynchronously. The NBI notification manager allows clients to subscribe to certain events, dispatches the subscribed events to the subscribers when needed, and provides event filtering.
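The subscribe/filter/dispatch behavior of the notification manager can be illustrated with a minimal event channel sketch; the class, event fields, and callback style are all hypothetical:

```python
# Minimal event-channel sketch modeled on the nbinotificationmgr description:
# clients subscribe with a filter, and published events are dispatched only
# to subscribers whose filter matches.

class EventChannel:
    def __init__(self):
        self._subscribers = []  # (event_filter, callback) pairs

    def subscribe(self, event_filter, callback):
        """Register interest in events where event_filter(event) is true."""
        self._subscribers.append((event_filter, callback))

    def publish(self, event):
        """Dispatch an event to every matching subscriber."""
        for event_filter, callback in self._subscribers:
            if event_filter(event):
                callback(event)

# Usage: a client subscribing only to alarm events.
channel = EventChannel()
received = []
channel.subscribe(lambda e: e.get("type") == "alarm", received.append)
channel.publish({"type": "alarm", "ne": "VSGW-1"})
channel.publish({"type": "heartbeat", "ne": "VSGW-1"})
# received now holds only the alarm event
```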
Network Management
The EGT4 is a three-tier web application that consists of the following components:
Client web browser
Web server
Back-end Light Weight Directory Access Protocol (LDAP) data store
CORBA name service
The client web browser serves as the EGT4 interface for managing the system data. The web browser is installed on the client machine (if the client machine is different from the server). The client machine must have IP connectivity to the machine hosting the EGT4 application. EGT4 currently supports Netscape v6.2.x in a Solaris or Windows environment. Internet Explorer is not supported.
Web Server
The web server is the main processing engine of the EGT4 application. The web server processes requests received from one or more connected web browser clients and performs data storage and NBI operations on their behalf. The web server must run co-resident with the LDAP directory server.
The LDAP directory server is the persistent data store utilized by EGT4 to track the current HN network configuration data. The LDAP server must run co-resident with the web server.
To support NBI connectivity to one or more OMC-Rs, the JacORB CORBA name service must run on the EGT4 server host machine. The name server allows various OMC-Rs to register their network location so that the EGT4 may locate them for connection purposes.
References
The following publications may be required to supplement the information contained in this chapter.
MPS and CWM Health Checks (HC05002)
MPS Technical Reference Guide (68P80802A95)
These documents are available on the Motorola Extranet web site at http://mynetworksupport.motorola.com unless otherwise stated.
Monitors and configures the Broadband Packet Exchange (BPX)/Internet Gateway Exchange (IGX)/Multi-Gigabyte Exchange (MGX).
Provides connection generation to various MPS elements.
Collects statistics on the health of the network.
Hardware Description
This section contains the following CWM-related hardware information:
Processor Switching Module (PXM)
Service Expansion Shelf (SES)
Private Network-Network Interface (PNNI) controller
PXM
The PXM is the central processor for the SES PNNI controller and provides Asynchronous Transfer Mode (ATM) switched virtual circuit routing and signaling for Cisco WAN switching networks. The PXM runs the system software and maintains the trunk to the BPX.
The SES PNNI Controller uses the combined network management system of Cisco WAN Manager to configure and monitor the SES PNNI node.
Software Description
The CWM downloads firmware to various MPS boxes and saves the MPS configurations.
Network Management
CWM is used for network and connection management functions such as topology, trap display, network browser, configuration/restore, and image download for the AutoRoute/PNNI network. The PXM is responsible for network management and communicates with the Cisco WAN Manager station.
Chapter 11
Other Network Elements
In This Chapter Topic See Page
ASP EM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16 NDM PWI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16 NDM BU Admin Web Interface. . . . . . . . . . . . . . . . . . . 11-16
Purpose
This chapter describes other Network Elements (NEs) that comprise the iDEN network. Each NE includes information on the key hardware components, software function, and network management facilities. The following NEs are included in this chapter:
iGW Overview
The iGW performs signaling and media interworking for private Dispatch calls, as well as Selective Dispatch Group Call (SDGC) calls, between a 3G network and the iDEN network. The 3G network features high-speed access, multi-media and data services over Internet Protocol (IP), and call control signaling using the Session Initiation Protocol (SIP) and the Session Description Protocol (SDP). SIP creates a call relationship for multimedia calls such as voice or Video over IP (VoIP), and SDP sets up the call details, including the type of encoding selected for the call. In the iDEN network, voice packets are transcoded using Vector Sum Excited Linear Prediction (VSELP) or Advanced Multi-Band Excitation Enhanced plus Two (AMBE++), whereas a 3G network may use any of a number of encoding algorithms. The iGW converts the voice packets from the encoding standard used in the 3G network to VSELP or AMBE in the iDEN network. The iGW also functions as a Push-to-Talk over Cellular (PoC) server to the SIP domain for the 3G network. The iGW provides the following functionality in the iDEN network:
Converts voice packets from 3G encoding to VSELP or AMBE.
Supports SDGC calls.
Connects the Dispatch subsystem with 3G SIP domains.
Provides pseudo HA-DAP, iHLR, and iDAC functionality to the iDEN Horizontal Network (HN).
Connects to the Operations and Maintenance Center-Radio (OMC-R) for operations and maintenance network communication.
The iGW internal network uses the Open Shortest Path First (OSPF) routing protocol for all internal iGW IP Layer 3 routing. IP routing in the iGW consists of configuring the iGTN routers and configuring the individual NEs (VSGW or 3G-XCDR). Traffic is divided into two planes: the control plane carries the signaling and network management traffic of the system, and the bearer plane carries the Real Time Transport Protocol (RTP) data streams carrying voice traffic.
The iGW Transport Network (iGTN) provides IP data transport to all NEs in the gateway (see Figure 11-1 iGW in the iDEN Network on page 11-5). The internal network depends on the configuration of these routers, which serve as the Area Border Routers (ABRs) to the customer IP network. The interface between the iGW and the customer network is defined at the exterior interfaces of the iGTNs. Because the iGTNs are the boundary between the local OSPF autonomous system and the external networks, the iGTNs are identified as Autonomous System Border Routers (ASBRs). Both iGTNs are also identified as the Designated Router (DR) and Backup Designated Router (BDR) for internal OSPF associations. Similarly, the VSGW and 3G-XCDR Zynx IPSB router boards perform an area border function and are classified as ABRs. The Zynx IPSB router boards are also the DR and BDR for the internal chassis OSPF areas.

Two Virtual Local Area Networks (VLANs) are required for the iGW internal networks. Virtual Local Area Network 2 (VLAN2) is used for inter-iGTN communication in OSPF Area 0, serves as the native VLAN for inter-switch links, and provides a Layer 2 broadcast domain for any locally attached maintenance terminals. Virtual Local Area Network 3 (VLAN3) provides inter-chassis communication to support OSPF Area 1, and is used for inter-switch transport of the VSGW bearer plane traffic associated with that area.

Figure 11-1 iGW in the iDEN Network
Hardware Description
The iGW is comprised of the following components:
iGW Transport Network (iGTN)
3G Transcoder (3G-XCDR)
Voice and Signaling Gateway (VSGW)
iGTN
The iGTN is deployed in four stand-alone cabinets and provides all of the IP transport for the iGW. The iGTN is the network interface reference point for all external connectivity to the iDEN network. The iGTN cabinet is deployed in a duplex configuration; both routers run in active/active mode and are viewed as a single unit by the iDEN network. The interfaces between the iGW and customer networks are defined at the exterior interfaces of the iGTNs, and each iGW NE relies on the iGTN routers for network connectivity.
3G-XCDR
The 3G-XCDR provides voice transcoding between iDEN VSELP and AMBE++, and the 3G network. The 3G-XCDR cabinet contains N+1 redundant 3G-XCDR NEs. The 3G-XCDR Zynx Internet Protocol Switch Board (IPSB) routers perform an area border function, and are classified as ABR. The Zynx IPSB router boards are also the DR and BDR for the internal chassis OSPF areas.
VSGW
The VSGW provides interworking between the 3G SIP network and the iDEN network. The VSGW may act as a pseudo-HA-iHLR by providing the hop route to the 3G network. When a private Dispatch call is initiated, the setup flows create a signaling path and a bearer path. The bearer path carries the VoIP packets between the SIP and iDEN domains. The VSGW converts the standard RTP to the iDEN RTP (iRTP), and also checks RTP packet integrity. The VSGW transfers call detail records to the billing server using FTP. The VSGW Zynx IPSB router boards perform an area border function and are classified as ABR. The Zynx IPSB router boards are also the DR and BDR for the internal chassis OSPF areas.
Software Description
This section contains the following software related information:
iGW Feature
The iGW-SDGC Enhancement feature impacts the iGW as follows. Previously, the iGW supported private Dispatch calls between two mobile stations. Now, the iGW supports SDGCs between a 3G network and the iDEN network. When an SDGC call is initiated, the 3G network forwards a SIP INVITE request. The VSGW sends a Session Progress response and forwards a Proxy Page Request Forward message to the HA-DAP. Next, the VSGW reserves transcoder resources to convert 3G voice to VSELP or AMBE++, and to convert RTP to iRTP. When the iDEN subscriber unit (SU) responds to the page, the HA-DAP sends an SDGC Page Response Forward message to the VSGW. The VSGW sends an OK response to the 3G network endpoint, signaling that the Dispatch call is connected. The following changes were made for SDGC calling on the Logical VSGW Configuration screen:
VSGW Private Call Billing Enable, and VSGW SDGC Billing Enable parameters were added to the NOC Billing Server Information for 3G pane.
The VSGW Billing Enabled parameter was renamed to VSGW Overall Billing Enable and the functionality has changed.
Default SDGC Call Priority and UFMI Cache Update Enable lines were added to the Subscriber Provisioning Default Information pane. A new Selective Dynamic Group Call Timers pane and associated parameters were also added.
The Logical VSGW SDGC Timers pane was added to the iGW Expert screen, and the External Domain-XCDR Interface Information pane was added to the 3G-XCDR screen.
VSGW Billing
Billing files are created in the /billing/active directory on one of the two Payload Home and Billing (PHaB) boards in each VSGW chassis. The PHaB contains both the Pseudo iDEN Home Location Registers (piHLRs) and the billing application. When the billing files are closed by the billing application, they are placed in the /billing/unsent directory. The billing files are then moved to the /billing/sent directory after the files are sent to the Network Operations Center (NOC).
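A minimal sketch of this active/unsent/sent file lifecycle, assuming the directory layout named above (the demo uses a temporary directory standing in for the /billing root, and the file name is illustrative):

```python
import os
import shutil
import tempfile

def close_billing_file(root, name):
    """The billing application closes a file: move it from active to unsent."""
    shutil.move(os.path.join(root, "active", name),
                os.path.join(root, "unsent", name))

def mark_sent(root, name):
    """After a successful transfer to the NOC, move the file to sent."""
    shutil.move(os.path.join(root, "unsent", name),
                os.path.join(root, "sent", name))

# Demo under a temporary directory standing in for the /billing root.
root = tempfile.mkdtemp()
for sub in ("active", "unsent", "sent"):
    os.makedirs(os.path.join(root, sub))
open(os.path.join(root, "cdr_0001.bil"), "w").close()
shutil.move(os.path.join(root, "cdr_0001.bil"),
            os.path.join(root, "active", "cdr_0001.bil"))
close_billing_file(root, "cdr_0001.bil")
mark_sent(root, "cdr_0001.bil")
final_path = os.path.join(root, "sent", "cdr_0001.bil")
```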
Time Synchronization
The Network Time Protocol (NTP) is used by the iGW NEs for time synchronization. The iGTN platform is an NTP client and, in some cases, may be configured as an NTP server. All SNMP traps and syslog events in the iGW use the Coordinated Universal Time (UTC) time zone for all system timestamps.
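Producing a UTC timestamp of the kind used for trap and syslog entries might look like this; the ISO 8601 format shown is an assumption, not the documented iGW format:

```python
from datetime import datetime, timezone

# All iGW traps and syslog events are timestamped in UTC; this sketch emits
# such a timestamp in an assumed ISO 8601 form, e.g. 2007-10-05T12:00:00Z.

def utc_timestamp():
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

stamp = utc_timestamp()
```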
SR 16.0 05-Oct-07
Network Management
This section provides the following network management information:
3G-XCDR Shelf Application Manager (SAM)
OMC-R network interface
iGTN SNMP configuration
Enhanced Global Title Translation Table Tool (EGT4)
Command Line Interface (CLI)
3G-XCDR SAM
The 3G-XCDR SAM registers with any VSGW that is provisioned in the iGW. The 3G-XCDR SAM tells each VSGW what channel element (pairs of vocoders) types are acceptable. When the registration is complete, the 3G-XCDR accepts call setup requests from the VSGW.
The OMC-R provides network management functionality for the iGW NEs with the exception of the iGTN. The OMC-R also provides the communication interface for EGT4 and supports remote management through the use of SNMP version 2. All of the iGW NEs use SNMP for remote configuration management, alarms, and trap reporting. The VSGW and 3G-XCDR are configured as SNMP clients and report all alarms and traps to the OMC-R.
The iGTN is not managed by the OMC-R. The iGTN routers are configured to enable remote management through the SNMP. The configuration of SNMP on the iGTN routers includes identifying the community strings for both Read-Only (RO) and Read-Write (RW) functions, identifying the OMC-R IP address, and identifying the source address for all SNMP traps and alarms. The iGTN supports the use of Access Control Lists (ACL) to control which system in the customer IP network has the ability to access the SNMP functions of the routers.
EGT4
EGT4 assists in the management and translation of network addresses for Integrated Services Digital Network (ISDN) IP and Universal Fleet Member ID (UFMI).
CLI
A CLI interface is available to perform operation and maintenance activities for the iGW NEs. The CLI is only used in initial system startup, off-line troubleshooting, and remote telnet management for configuration.
NDM Overview
NDM provides a cost-effective, commercial IP solution offering NetDispatch services to the iDEN customer through the web-based Dispatch Messenger™ (DM). NetDispatch adds messaging capabilities between IP Clients and subscriber units. The NDM functionality is available in fixed locations, in areas where handsets are prohibited, and in areas of poor or no Radio Frequency (RF) coverage. The NDM provides the following functionality in the iDEN network (see Figure 11-3):
Offers Dispatch Messenger for NetDispatch services.
Supports legacy and extended Dispatch service.
Allows the service provider to configure the northbound T1 network in Split or All Frame Relay (AFR) backhaul mode.
Supports third-party development of the provisioning client interface.
The NDM is accessed through the DM application. The DM application runs on a Personal Computer (PC) and connects the user to the ASP over an IP network. The IP network is either ethernet, wireless LAN, or a dial-up modem connection. PC users who are connected to the service provider's IP network use the DM to communicate with other PC users on the service provider's IP network, and also with mobile subscribers who are active in the iDEN network.
Hardware Description
The NDM is comprised of the following components:
All NDM components utilize the TS230 server, which is a dual processor 2 Rack Unit (RU) server platform (see Figure 11-4 NDM Cabinet - Front and Rear on page 11-13). The NDM system is installed with two ASPs and two PTx-AS for redundancy and availability in the iDEN network. The NDM servers communicate with Dispatch Messenger users over an IP network, and with the iDEN network over a T1/E1 interface. The NDM components receive power from the -48 V DC power system.
ASP
The ASP terminates iDEN connectivity and handles NDM call processing. The ASP appears to the Mobile Switching Office (MSO) as a pseudo-EBTS, and each ASP has a minimum of 1 pseudo-EBTS site. Each ASP node in the NDM system is connected with one to four T1s. The ASP nodes are deployed in an active/active configuration and are connected to the IP switch and the Rollobox using serial console cables for remote debugging. Each node independently supports the required maximum (80K) subscriber connection capacity, and the required maximum (200 calls per T1; 800 calls/node) simultaneous call load.
Dual 2.4 Gigahertz (GHz) processors
2 Giga-Byte (GB) memory, 146 GB hard disk
Peripheral Component Interconnect (PCI) Slot 1: security accelerator card
PCI Slot 2: T1 card
PCI Slot 3: ethernet card
PTx-AS
The PTx-AS is responsible for deploying the NDM application which includes user authentication and a Web browser. The PTx-AS servers are colocated with the ASP servers and are typically located at the iDEN MSO. The PTx-AS is comprised of the following components:
Dual 2.4 GHz processors
2 GB memory, 146 GB hard disk
PCI slots: Slot 1 is the security accelerator card, Slot 2 is empty, and Slot 3 is the ethernet card
146 GB ultra Small Computer System Interface (SCSI) disk drive
Front
Rear
Software Description
This section includes the following software related information:
Billing
The NDM generates billing files that are unique from the HA-DAP. Each billing file contains a record of completed DM calls that are processed by the ASP. The call data stored in the billing file is used by the administrator to create billing records for utilization charges and perform statistical analysis on call profiles.
Statistics
Statistics for private calls, group calls, call alerts, Dispatch, and alarms are collected and assigned to unique files. At the start of each day of the week, a statistics file is generated for that particular day. Statistics generated in the last 7 days are viewable on the ASP Element Manager (EM).
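One way the per-day file scheme above could work is a file name keyed to the day of the week, so each new week overwrites the same seven slots. The naming format and category names are assumptions for illustration:

```python
from datetime import date

# Sketch of per-day statistics file naming: one file per category per day of
# the week, so only the last 7 days are ever retained. Format is assumed.

CATEGORIES = ("private_call", "group_call", "call_alert", "dispatch", "alarm")

def stats_filename(category, day):
    """e.g. group_call on a Friday -> 'group_call.Fri.stats'."""
    return f"{category}.{day.strftime('%a')}.stats"

name = stats_filename("group_call", date(2007, 10, 5))  # 05-Oct-07, a Friday
```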
Alarms and events are available in real time from the Welcome page in the ASP Element Manager.
Network Management
This section contains the following network management information:
DM
DM is the Windows-based software application that enables Dispatch services and messaging capabilities for Dispatch users on an IP network. The following features are available in the DM application:
Instant Message Text to Speech Dispatch Voice
Dispatch Voicemail
Presence and Availability
Dispatch Voice Notes
To initiate a call, the PC user selects the Private or Group Name/ID from the DM window and speaks into the microphone. Call setup information is sent over the PC user's IP network or dial-up connection to the ASP, and is forwarded to the HA-DAP where it is processed. Calls that are initiated by PC users through the ASP are transparent to the iDEN network; these calls are processed as if they were initiated through an EBTS.

A PC user and an SU can initiate and receive the following Dispatch calls: Private Call, Call Alert, Group Dispatch Call, User Defined Group Call (UDGC), and Emergency Group Call. A call can be initiated from one PC user to another PC user, from a PC user to an SU, and from an SU to a PC user. The call types available through the DM application depend on service provider network configuration and policies. Some service providers may not support NDM call types such as User Defined Group Call or Emergency Group Call.

The NDM system also allows the iDEN technician to group NDM users into business units. Each Business Unit has a Business Unit Manager, or BU Admin, assigned to manage users in that Business Unit. The BU Admin accesses the ASP data store from a public IP network through the PTx-AS, and is provided with an account and an ID/password/e-mail address.
The Instant Message Text to Speech Dispatch Voice feature allows PC users to send text-to-speech messages to other PC users and/or mobile subscribers registered in the iDEN network. The feature is initiated only by a PC user, but may be received by a PC user or an SU. An SU cannot initiate the Instant Message Text to Speech Dispatch Voice feature because this service requires the use of the DM and a PC. Only PC users can access the Dispatch Voicemail feature; SU and PC users can leave a message with a PC user's Dispatch Voicemail.
Dispatch Voicemail
The Dispatch Voicemail feature allows PC users to receive voicemail for Private Calls and retrieve the message at a later time.
NDM provides real-time presence and availability information to all PC users. The ASP can detect whether PC users are online (logged in) or offline (logged out). The presence and availability feature is only available for PC users; it is not available to an SU because this service requires the use of the DM and a PC.
The Dispatch Voice Notes (DVN) feature allows a PC user to send prerecorded audio messages to another PC user, or a SU in the iDEN network.
ASP EM
The ASP EM is activated on each ASP node during installation, and provides configuration, fault, and performance management for that node. The ASP EM interface is accessible through the service provider's IP network.
NDM PWI
The NDM PWI is used to provision the ASP. Provisioning includes assigning user identity information for authentication, and service related access. Every DM user is provisioned in the HA-iHLR and then the ASP for authentication. The NDM PWI is accessible through the ASP EM.
Index
Symbols Numerics
5K Sites 4-3
A
Access Control Gateway (ACG) 6-13 ACG (Access Control Gateway 6-13 Advanced Services Platform. See ASP AirGen 2-6, 10-18 Alarm Management Controller card 9-16 Alarm on LMT login/logout 8-28, 8-39, 8-49 Alarms for Packet Data 9-29 All Frame Relay Interconnect subsystem 7-8 Packet Data subsystem 9-5 Radio Access Network subsystem 6-7 AMC card 9-16 APD capacity 8-58 front view 8-60 functionality 8-58 hardware description 8-59 HSSI I/O board 8-59 LED board 8-60 MMI 8-61 MTX board 8-59 network management 8-61 OMC-R interface 8-61 overview 8-58 power supply 8-59 software description 8-61 ASP 11-13- 11-15 element manager 11-16 audio routing 8-34
Act) XCIPIO 5-6 Call and Data Services for Subscribers and Users 2-3 Call Intercept Provisioning Center 7-39 call monitoring/surveillance 2-4 cell 2-9 Cisco WAN Manager functionality 10-23 hardware description 10-23 network management 10-24 overview 10-23 PXM 10-23- 10-24 SES PNNI controller 10-24 software description 10-24 Cisco WAN Manager, See CWM Communications Assistance for law Enforcement Act, See CALEA Configuration Management 2-6 Core Processor 7-28 CPU board 9-26
D
DAP account and performance management 8-18 availability management 8-19 BCCB 1008 8-17 BCCB 1020 8-17 BCCB 862 8-16 call processing management 8-19 capacity 8-10 CCP layer 8-21 common agent 8-20 components 8-12 configuration and state management 8-19 core architectural models 8-18 CPU 8-13 database management 8-18 enhancements 8-16 event logs 8-20 fleet ID 8-11 fleet member ID 8-11 functionality 8-10 hardware description 8-12 iDEN menu 8-19 IOC 8-13 MAP 8-21 MSD 8-13 multiple simultaneous talk group ID 8-12 network capacity 8-10 network management 8-19 overview 8-10 R10K 8-12 R10K cabinet 8-14 R12K 2-way 8-12 R12K 4-way 8-12 R12K cabinet 8-14 resource management 8-18 ROC 8-13 site specific parameters 8-12 SMT menu 8-20 SSC 8-13 subscriber parameters 8-10 system control management 8-18 talk group ID 8-11
B
BA 9-24 back panel 9-27 front view 9-26 hardware description 9-25 I/O Communications Transition board 9-27 mirror function 9-25 network management 9-29 OMC-R 9-29 software description 9-28 system monitor board 9-26 overview 9-24 Base Radio 6-9 in EBTS system 6-18 BCCB 1008 8-17 1020 8-17, 8-29 862 8-16 Billing Accumulator, See BA billing in the system 2-5 Broadband Packet eXchange, See BPX BU admin web interface 11-16 Business Change Control Channel. See BCCB
C
CALEA (Communications Assistance for Law Enforcement
Functional Description i
urban ID 8-11 Data 1-6 Data Files 9-28 Database query 8-31 Database replication 8-31 DCS 6-8 Delivery Features for Service Providers 2-4 description 5-8 Digital Cross Connect Switch, See DCS Digital Trunk Controller, See DTC Dispatch 1-9 Dispatch Application Processor. See DAP Dispatch call processing 8-11 Dispatch Messenger. See DM 11-14 Dispatch voice notes 11-15 Dispatch voicemail 11-15 DM 11-14 initiate a call 11-15 documents, iDEN protocol 5-17 Domain 2-7 Fleet member ID 8-11 Frame Relay 8-7, 10-14 Frame Relay to Internet Protocol, See FRIP FRIP 8-34, 8-36 FRUs (Field Replaceable Units) 9-11
G
Global 2-7 Global Positioning System Denial Response 4-5 Global Positioning System, See GPS
H
HA-DAP alarm on LMT login/logout 8-28 BCCB 1020 8-29 capacity 8-23 database maintenance 8-31 database query 8-31 database replication 8-31 ELMT 8-33 feature 8-26 functionality 8-23 HA services layer 8-32 hardware description 8-24 iGW-SDGC enhancement feature 8-26 LMT 8-32 LMT master iVPU 8-29 network management 8-32 NGD enhancements 8-28 OSPF 8-32 overview 8-23 remote access security 8-28 root access control 8-29 RX7620 server 8-25 software description 8-26 HA-iHLR alarm on LMT login/logout 8-49 alarm panel 8-46 backup and restore CLI 8-51 cabinet 8-47 capacity 8-45 data collection time interval 8-49 database ASCII dump 8-50 debugging data collection tool 8-49 disk arrays 8-46 ELMT 8-51 ethernet physical port status 8-48 features 8-48 functionality 8-45 hardware description 8-46 health check 8-48 LEDs 8-46 LMT 8-50 LMT master iVPU 8-50 load shedding status 8-48 network management 8-50 NGD enhancements 8-49 OCP 8-46 overview 8-45 ping and TraceRoute 8-48 provisioning transaction log file 8-52 restricted access key 8-49 RMC 8-51 root access control 8-49
E
EBTS brief description 6-8 cabinets 6-11 event logs 6-20 functional description 6-10 hardware description 6-10 network management 6-19 OMC-R 6-19 overview 6-10 software description 6-19 EBTS Frame Relay to Internet Protocol, See eFRIP EBTS Re-rack with Dual Band RFDS 4-4 eFRIP 8-36, 8-38 EGT4 client web browser 10-22 CORBA name service 10-22 functionality 10-20 hardware description 10-21 LDAP directory server 10-22 NBI access manager 10-21 NBI connectivity 10-22 NBI notification manager 10-21 NBI object manager 10-21 network management 10-22 overview 10-20 software description 10-21 web server 10-22 ELMT 8-33, 8-51 Emergency Local Maintenance Terminal. See ELMT Encryption Support for Dispatch 2-5 Enhanced Base Transceiver System See EBTS enhanced frame relay channel statistics 8-40 Enhanced Global Title Translation Table Tool, See EGT4 enhanced LMT capabilities 8-40 Enhanced Network, See ENET Environmental Alarm System, See EAS
F
Feature Matrix, SR16.0 4-7 Fiber Link Peripheral Processor, See FLPP Field Replaceable Units (FRUs) 9-11 Fleet ID 8-11
Index

    software description 8-48
    SRM 8-51
    TS20 server 8-46
    TS40 AlphaServer 8-46
Highly Available-Dispatch Application Processor, See HADAP
Highly Available-iDEN Home Location Register, See HAiHLR
HN 8-36, 8-38, 10-20, 10-22, 11-4
Home Agent, See HA
Home Location Register (HLR) 7-29
Horizontal Network, See HN
Horizontal Networking 2-5
HSSI I/O board 8-59

I

I/O Communications Transition board (in BA) 9-27
iBSC
    architecture 7-11
    coverage 7-11
    functional description 7-9
    hardware 7-13
    hardware description 7-12
    in the iDEN network 7-10
    links and interfaces 7-21
    links and interfaces description 7-22
    network management 7-23
    OMC-R 7-24
    overview 7-9
    software description 7-22
iCP
    board functions 7-16
    boards 7-16
    components 7-13
    configurations 7-23
    firmware downloads 7-24
    frame layout 7-14
    function 7-12
    I/O Blade cards 7-16
    Input/Output shelf 7-15
iDAC
    alarm panel 8-54
    capacity 8-53
    CPU board 8-54
    functionality 8-53
    GTT-IIC ASCII 8-57
    hardware description 8-54
    HSSI interface module 8-54
    LEDs 8-54
    LMT 8-57
    module 8-54
    network management 8-57
    OMC-R network interface 8-57
    OMC-R software interface 8-57
    overview 8-53
    software description 8-57
iDBAD 8-50
iDEN
    infrastructure for the MSO 1-8
    network hierarchical structure 2-9
    network protocols 5-11
    protocol documents 5-17
    system introduction 1-3
    system organization 2-7
    system technologies 1-5
iDEN Base Site Controller, See iBSC
iDEN Call Processor, See iCP
iDEN Database ASCII Dump, See iDBAD
iDEN Dispatch Access Controller, See iDAC
iDEN Gateway Dispatch Roaming Registration 4-2
iDEN Gateway, See iGW
iDEN Monitor Unit and Environmental Alarm System, See iMU and EAS
iDEN Monitor Unit, See iMU
iDEN Surveillance Gateway, See iSG
iDEN Vocoder Processing Unit for Dispatch and Interconnect, See iVPUdi
iDEN Vocoder Processing Unit for Dispatch, See iVPUd
iDEN Vocoder Processing Unit, See iVPU
iGW
    3G-XCDR 11-6
    3G-XCDR SAM 11-9
    cabinet 11-7
    call flow 3-38
    CLI 11-9
    EGT4 11-9
    feature 11-8
    functionality 11-4
    hardware description 11-6
    iGTN 11-6
    iGTN SNMP configuration 11-9
    iGW-SDGC enhancement 11-8
    network communication 11-5
    network management 11-9
    OMC-R network interface 11-9
    overview 11-4
    SDGC 11-4, 11-8
    session description protocol 3-38
    session initiation protocol 3-38
    software description 11-7
    time synchronization 11-8
    VSGW 11-6
iGW-SDGC enhancement feature 8-26
iMU and EAS 6-16
initial registration 2-4
initial SU system contact 2-4
Instant Message Text to Speech Dispatch Voice 11-15
Interactive Voice Response device, See iVR
Interconnect subsystem
    description in iDEN 1-8
    Facilities Interface 7-29
    I/O Blade cards 7-16
    IWF 7-33
    Mixed Mode configuration 7-7
    MSC 7-27
    Next Generation Dispatch (NGD) configurations 7-5
    overview 7-5
    purpose 7-4
    subsystem description 7-5
Interfaces for Customer-provided Administrative Data platforms 2-5
Internet Service Provider, See ISP
Interworking Function, See IWF
Integrated GigaBit Exchange, See iGX
IOS Import / Export Tool for Airgen 2-6
IP Footprint Reduction 7-21
iSC2
    description 6-14
    front and rear views 6-15
iSC3
    description 6-13
    front panel 6-13
iSG 3-36
    alarm panel 8-64
    cabinet 8-65
    cooling fans 8-64
    CPU 8-63
    DC power distribution panel 8-64
    functionality 8-62
    hardware description 8-63
    HSC board 8-64
    HSSI adapter board 8-64
    LMT 8-67
    network management 8-66
    overview 8-62
    software description 8-66
    SPA 8-66
iVPU
    board functions 7-20
    cabinet 8-37
    components 7-18
    components description 7-17
    enhanced frame relay channel statistics 8-40
    enhanced LMT capabilities 8-40
    enhancements 8-39
    feature 8-39
    front cage 7-19
    front cage boards 7-19
    function 7-12
    functionality 8-34
    HSRP protocol 8-43
    iVPUd overview 8-34
    iVPUdi overview 8-35
    LMT master iVPU 8-41
    LMT user accounts 7-25
    MMI command interface 8-42
    overview 8-34
    rear cage 7-20
    remotely manage iVPU 8-41
    Rolling Software Upgrade 7-22
    rolling software upgrade 8-42
    root access control 8-40
    VPU SPROC/ISB LED indicators 8-40
    VRRP protocol 8-43
    VRRP support on iVPU 8-41
iVPUd 8-34
    final NGD configuration 8-8
    FRIP 8-36
    hardware description 8-36
    horizontal function 8-36
    in Mixed Mode 8-7
iVPUdi 8-7, 8-35
    FRIP 8-38
    hardware description 8-37
    horizontal function 8-38
    in Mixed Mode 8-8
IWF 7-33
    back view 7-35
    front view 7-35
    hardware description 7-34
    network management 7-36
    overview 7-33
    services 7-34
    software description 7-36
L
LDAP 10-22
LED board 8-60
legacy BSC
    functional description 7-26
Lifetime Audit Registration Renewal Required 4-3
Lightweight Directory Access Protocol, See LDAP
LMT 8-32, 8-40, 8-49–8-50
    enhanced capabilities 8-40
    functionality 8-32
    master iVPU 8-41
LMT Master iVPU 8-29
Load Shedding
    Packet Data subsystem 9-23
    Radio Access Network subsystem 6-20
Local Maintenance Terminal, See LMT
Location Area 2-8
Lynx 8-32, 8-51
M
Main Serial Port Transition board 9-28
Mass Storage Device, See MSD
MDG2 9-6
    boards 9-8
    Encryption Compression Processor boards 9-9
    event logs 9-13
    front view 9-8, 9-10
    hardware description 9-7
    MDGio board 9-9
    MDGp board 9-9
    MMI commands 9-13
    network management 9-11
    OMC-R 9-12
    overview 9-6
    RAG downloads 9-13
    software description 9-11
MDG4 9-14
    board layout 9-18
    Cluster Management/Fault Recovery 9-22
    event logs 9-22
    front view 9-19
    hardware description 9-16
    in the iDEN Network 9-15
    MDGc card 9-17
    MDGd card 9-17
    MDGv card 9-17
    MMI commands 9-21
    network management 9-20
    OMC-R 9-20
    overview 9-14
    RAG Downloads 9-22
    software description 9-19
    system management software 9-19
    User Interface Menu 9-21
mFRIP 8-36, 8-38
Mixed Mode 8-7
MLC
    functional description 7-31
    hardware description 7-31
    network management 7-32
    software description 7-31
MMI 8-61
Mobile 7-26
Mobile Data Gateway 2, See MDG2
Mobile Data Gateway 4, See MDG4
MPS
    BPX 8-69, 8-73
    BPX functionality 8-68
    cabinet 8-71
    Cisco WAN Manager 8-73
    CLI 8-74
    functionality 8-68
    hardware description 8-69
    links 10-5
    MGX 8820 8-70, 8-72–8-73
    MGX 8850 8-70, 8-72, 8-74
    MGX functionality 8-69
    network management 8-73
    overview 8-68
    redundancy 8-73
    SES 8-74
    SES node architecture 8-72
    SES PNNI 8-70
    software description 8-71
MPS Frame Relay to Internet Protocol, See mFRIP
MSC
    components 7-28
    functionality 7-26
    hardware description 7-28
    in the Interconnect subsystem 7-27
    monitoring center 7-30
    network management 7-30
    Signal Interface 7-29
    software description 7-30
    VLR
MSD 8-13
MTX board 8-59
Multi Gigabyte Exchange, See MGX
Multiple simultaneous talk group ID 8-12

N

NDM
    alarms and events 11-14
    ASP 11-11
    billing 11-14
    BU admin web interface 11-16
    cabinet 11-13
    dispatch voice notes 11-15
    dispatch voicemail 11-15
    DM 11-14
    functionality 11-10
    hardware description 11-11
    instant message text to speech dispatch voice 11-15
    network connections 11-11
    network management 11-14
    overview 11-10
    presence and availability 11-15
    PTx-AS 11-11–11-12
    PWI 11-16
    software description 11-14
    statistics 11-14
network
    protocols, iDEN 5-11
Network Dispatch Messenger, See NDM
new features
    Network Management Features for Network Operators 2-6
Next Generation Dispatch, See NGD
NGD
    configuration 6-6
    features 7-6
    final implementation 8-8
    Mixed Mode 8-7
    Split Mode 8-7
NGD System
    for Interconnect 7-6
    for Packet Data 9-5
NGO
    network infrastructure 1-7
    network management 1-10

O

OCP 8-46
OLCC Display and Logging 7-25
OMC Password Policy Enhancements 4-3
OMC-R
    BA 9-29
    configuration and load management 10-11
    EBTS 6-19
    Enterprise 3500 10-7, 10-9
    event/alarm management 10-11
    fault management 10-12
    functionality 10-4
    hardware description 10-5
    iBSC 7-24
    Informix 10-10
    MDG2 9-12
    MDG4 9-20
    MMI processor 10-9
    MMI processor application 10-10
    NE software loads 10-10
    Netra 20 10-9–10-10
    Netra 240 10-16
    network management 10-11
    OML 10-5
    overview 10-4
    performance management 10-11
    redundancy 10-12
    SDM/FT 7-39
    security management 10-12
    short description 10-10
    SNMP 10-5
    Sun Fire 4800 10-5–10-7
    system processor 10-5
    system processor applications 10-10
    typical configuration 10-4
    Ultra 60 10-9–10-10
    X-terminal clients 10-10
OMC-R (Next Generation)
    AirGen 10-18
    capacity 10-14
    download queue manager 10-17
    external X.25 hub 10-16
    functionality 10-13
    hardware description 10-14
    Informix 10-18
    LAN switch 10-16
    LDM download manager 10-17
    Netra 240 backup server 10-16
    Netra 440 10-13–10-15
    network management 10-19
    Network Views 10-19
    NTS 10-16
    overview 10-13
    software description 10-17
    StorEdge 3510 disk array 10-16
    SunRay X-terminal 10-16
OMC-R (Operations and Maintenance Center) 5-8
OML 10-5
On-Line Configuration Change 2-6
Online Configuration Change 7-25
Open Shortest Path First, See OSPF
Operations and Maintenance Center, See OMC-R
Operations and Maintenance Links, See OML
Operator Control Panel, See OCP
OSPF 8-32
Over-the-Air-Programming 2-4

P

Packet Data 1-9
    overview 9-4
Packet Data subsystem
    description 9-4
    Ethernet board 9-26
    Ethernet Port Connector board 9-27
    indicators and controls 9-12
    Input Power Switch 9-12
    Input Power Transition board 9-28
    IP Switch Board cards 9-16
    NGD configurations 9-5
    purpose 9-3
Packet Data subsystem indicators 9-12
Packet Data, See PD
PCI Mezzanine card 9-9
PDR Warning 7-25
Performance Management 2-6
police monitoring 5-6
Power Converter Module 9-26
Power supply 8-59
Power Supply Modules 9-10
Presence and availability 11-15
Private Call on 900 MHz (PC) 4-2
Processor Switching Module, See PXM
protocol
    documents 5-17
    iDEN network 5-11
Protocols 3-17, 3-36
Provisioning Web Interface, See PWI
PTx-AS 11-11–11-12
Push-to-Talk Application Server, See PTx-AS
PWI 11-16
PXM 10-23

R

Radio Access Network subsystem
    architecture 6-5
    description 6-4
    EBTS overview 6-10
    general call functionality 6-7
    iMU and EAS 6-16
    key network elements 6-8
    Mixed Mode configuration 6-6
    MMI commands 6-19
    NGD configurations 6-5
    NGD Mixed Mode configuration 6-7
    overview 6-4
    purpose 6-3
    radio link functionality 6-8
    status indicators 6-19
Radio Frequency Cabinet 6-12
Radio Frequency Distribution System (RFDS) 6-9
Radio Protocol 7-34
Region 2-7
Remote access security 8-28
Remote Management Console, See RMC
Remotely manage iVPU 8-41
Retuning Management Application (RMA) 4-4
RF Distribution System, See RFDS
RFDS 6-16
    Cavity Combining RFDS 6-17
    Duplex RFDS 6-17
RFDS (Radio Frequency Distribution System) 6-9
RMC 8-45, 8-51
Rolling software upgrade 8-42
Root access control 8-29
routing of voice 8-53
RSL Configurations 7-17
RX7620 server 8-25
    back panel 8-26
    front panel 8-25

S

SCSI hard drive 9-27
SDGC 8-16, 11-4, 11-8
    configuration changes 11-8
SDM/FT
    functional description 7-37
    hardware description 7-38
    MMI commands 7-39
    monitored events 7-38
    network management 7-39
    OMC-R 7-39
    overview 7-37
    software description 7-39
Selective Dispatch Group Call, See SDGC
Service Area 2-8
Service Resource Module, See SRM
Short Message Service, See SMS
Signaling Transfer Point, See STP
Simple Network Management Protocol, See SNMP
simplified SR15.0 components diagram 1-8
SNMP 10-5
Spectrum Peripheral Module Enclosure, See SPME
Split Mode 8-7
    Interconnect subsystem 7-8
    Radio Access Network subsystem 6-7
    RSL card Configuration 7-17
Split Mode Configuration
    Interconnect subsystem 7-8
SSC 8-13
Standard System Controller, See SSC
subscriber databases 2-5
subscriber services 1-4
SuperNode Data Manager/Fault Tolerant, See SDM/FT
Switch Matrix 7-28
system introduction 2-3

Functional Description v
T
Talk group ID 8-11
TS40 AlphaServer 8-46
Two-way Dispatch Radio 1-5
U
Urban ID 8-11
urbans 2-8
user interface menu 9-21
V
Virtual Router Redundancy Protocol 7-25
Virtual Router Redundancy Protocol, See VRRP
Visitor Location Register, See VLR
VLR 7-29
    mirroring 9-22
Voice and Signaling Gateway, See VSGW
VRRP 8-43
VRRP support on iVPU 8-41
VSGW 8-16, 8-26, 11-4–11-5
    billing 11-7
W
wireless telephony 1-5
X
XCIPIO 5-6
SR 16.0 05-Oct-07
MOTOROLA and the Stylized M logo are registered in the U.S. Patent and Trademark Office. All other product or service names are the property of their respective owners. © Motorola, Inc. 2007
6871000P02-A