Extending an Exadata Eighth Rack to a Quarter Rack

In the past year I've done a lot of Exadata deployments and probably half of them were eighth racks. It's one of those temporary things – let's do it now and we'll change it later. It's the same with the upgrades – I had never seen anyone do an upgrade from an eighth rack to a quarter. However, a month ago one of our customers asked me to upgrade their three X5-2 HC 4TB units from an eighth rack to a quarter rack configuration.
    What's the difference between an eighth rack and a quarter rack

    An X5-2 Eighth Rack and an X5-2 Quarter Rack have the same hardware and look exactly the same. The only difference is that only half of the compute power and storage space on an eighth rack is usable. In an eighth rack the compute nodes have half of their CPUs activated – 18 cores per server. It's the same for the storage cells – 16 cores per cell, and only six hard disks and two flash cards are active.
    While this is true for X3, X4 and X5, things have slightly changed for X6. Up until now, eighth rack configurations had all the hard disks and flash cards installed but only half of them were usable. The new Exadata X6-2 Eighth Rack High Capacity configuration has half of the hard disks and flash cards removed. To extend an X6-2 HC to a quarter rack you need to add high capacity disks and flash cards to the system. This is only required for High Capacity configurations because X6-2 Eighth Rack Extreme Flash storage servers have all flash drives enabled.
    What are the main steps of the upgrade:

   
          
  • Activate Database Server Cores
  • Activate Storage Server Cores and disks
  • Create eight new cell disks per cell – six hard disks and two flash disks
  • Create all the new grid disks (DATA01, RECO01, DBFS_DG) and add them to the disk groups
  • Expand the flashcache onto the new flash disks
  • Recreate the flashlog on all flash cards
    Here are a few things you need to keep in mind before you start:

   
          
  • The compute node upgrade requires a reboot for the changes to take effect.
  • The storage cell upgrade does NOT require a reboot – it is an online operation.
  • The upgrade is low risk – your data is secure and redundant at all times.
  • This post is about an X5 upgrade. If you were upgrading an X6, you would first need to install the six 8 TB disks in HDD slots 6 – 11 and the two F320 flash cards in PCIe slots 1 and 4.
    Upgrade of the compute nodes

    Well, this is really straightforward and you can do it at any time. Remember that you need to restart the server for the change to take effect:
   
    dbmcli -e alter dbserver pendingCoreCount=36 force
    DBServer exa01db01 successfully altered. Please reboot the system to make the new pendingCoreCount effective.
   Reboot the server to activate the new cores. It will take around 10 minutes for the server to come back online.
    Check the number of cores after the server comes back:
   
    dbmcli -e list dbserver attributes coreCount
    cpuCount:               36/36
   Make sure you've got the right number of cores. These systems allow capacity on demand (CoD) and in my case the customer wanted me to activate only 28 cores per server.
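    If you are licensing under capacity on demand, the same command simply takes the lower core count – a minimal sketch assuming a 28-core target, run on each compute node and followed by the same reboot:

    dbmcli -e alter dbserver pendingCoreCount=28 force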
    Upgrade of the storage cells   

    Like I said earlier, the upgrade of the storage cells does NOT require a reboot and can be done online at any time.
    The following needs to be done on each cell. You can, of course, use dcli but I wanted to do that cell by cell and make sure each operation finishes successfully.
    1. First, upgrade the configuration from an eighth to a quarter rack:

   
    [root@exa01cel01 ~]# cellcli -e list cell attributes cpuCount,eighthRack
    cpuCount:               16/32
    eighthRack:             TRUE

    [root@exa01cel01 ~]# cellcli -e alter cell eighthRack=FALSE
    Cell exa01cel01 successfully altered

    [root@exa01cel01 ~]# cellcli -e list cell attributes cpuCount,eighthRack
    cpuCount:               32/32
    eighthRack:             FALSE
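    If you prefer dcli to per-cell sessions, the same check can be run against all storage cells in one go – a sketch assuming the usual cell_group file listing your cells:

    dcli -g cell_group -l root cellcli -e "list cell attributes name, cpuCount, eighthRack"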
   2. Create cell disks on top of the newly activated physical disks

    Like I said – this is an online operation and you can do it at any time:
   
    [root@exa01cel01 ~]# cellcli -e create celldisk all
    CellDisk CD_06_exa01cel01 successfully created
    CellDisk CD_07_exa01cel01 successfully created
    CellDisk CD_08_exa01cel01 successfully created
    CellDisk CD_09_exa01cel01 successfully created
    CellDisk CD_10_exa01cel01 successfully created
    CellDisk CD_11_exa01cel01 successfully created
    CellDisk FD_02_exa01cel01 successfully created
    CellDisk FD_03_exa01cel01 successfully created
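    Before moving on you can confirm that all twelve hard disk and four flash cell disks are now present and normal – an illustrative check run on the cell:

    cellcli -e list celldisk attributes name, diskType, size, status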
   3. Expand the flashcache onto the new flash cards

    This is again an online operation and it can be run at any time:
   
    [root@exa01cel01 ~]# cellcli -e alter flashcache all
    Flash cache exa01cel01_FLASHCACHE altered successfully
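    To confirm the cache now spans all four flash cards you can look at its size and cell disk list – an illustrative check:

    cellcli -e list flashcache detail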
   4. Recreate the flashlog

    The flashlog is always 512 MB in size, but to make use of the new flash cards it has to be recreated. Use the DROP FLASHLOG command to drop the flash log, and then use the CREATE FLASHLOG command to create a new one. The DROP FLASHLOG command can be run at runtime, but it does not complete until all redo data on the flash disk has been written to hard disk.
    Here is an important note from Oracle:
    If FORCE is not specified, then the DROP FLASHLOG command fails if there is any saved redo. If FORCE is specified, then all saved redo is purged, and Oracle Exadata Smart Flash Log is removed.
   
    [root@exa01cel01 ~]# cellcli -e drop flashlog
    Flash log exa01cel01_FLASHLOG successfully dropped
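    With the old flash log dropped, recreate it so that it spans all four flash cards – this is the standard CellCLI syntax:

    cellcli -e create flashlog all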
   5. Create grid disks

    The best way to do that is to query the size of the current grid disks and use it when creating the new ones. Use the following queries to obtain the size of each grid disk. We query disk 02 because the first two cell disks (00 and 01) hold the system area and do not have DBFS_DG grid disks on them.
   
    [root@exa01db01 ~]# dcli -g cell_group -l root cellcli -e "list griddisk attributes name, size where name like \'DATA.*02.*\'"
    exa01cel01: DATA01_CD_02_exa01cel01        2.8837890625T
    [root@exa01db01 ~]# dcli -g cell_group -l root cellcli -e "list griddisk attributes name, size where name like \'RECO.*02.*\'"
    exa01cel01: RECO01_CD_02_exa01cel01        738.4375G
    [root@exa01db01 ~]# dcli -g cell_group -l root cellcli -e "list griddisk attributes name, size where name like \'DBFS_DG.*02.*\'"
    exa01cel01: DBFS_DG_CD_02_exa01cel01       33.796875G
   Then you can either generate the commands and run them on each cell or use dcli to create them on all three cells:
   
    dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA01_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=2.8837890625T"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA01_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=2.8837890625T"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA01_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=2.8837890625T"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA01_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=2.8837890625T"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA01_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=2.8837890625T"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA01_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=2.8837890625T"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO01_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=738.4375G"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO01_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=738.4375G"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO01_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=738.4375G"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO01_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=738.4375G"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO01_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=738.4375G"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO01_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=738.4375G"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=33.796875G"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=33.796875G"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=33.796875G"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=33.796875G"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=33.796875G"
    dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=33.796875G"
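    Before touching ASM it is worth confirming that the new grid disks exist on every cell and are still unassigned – an illustrative check:

    dcli -g cell_group -l celladmin cellcli -e "list griddisk attributes name, size, asmModeStatus"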
   6. The final step is to add the newly created grid disks to ASM

    Connect to the ASM instance using sqlplus as sysasm and disable the appliance mode:
   
    SQL> ALTER DISKGROUP DATA01 set attribute 'appliance.mode'='FALSE';
    SQL> ALTER DISKGROUP RECO01 set attribute 'appliance.mode'='FALSE';
    SQL> ALTER DISKGROUP DBFS_DG set attribute 'appliance.mode'='FALSE';
   Add the disks to the disk groups. You can either queue the statements on one instance or run them on both ASM instances in parallel:
   
    SQL> ALTER DISKGROUP DATA01 ADD DISK 'o/*/DATA01_CD_0[6-9]*','o/*/DATA01_CD_1[0-1]*' REBALANCE POWER 128;
    SQL> ALTER DISKGROUP RECO01 ADD DISK 'o/*/RECO01_CD_0[6-9]*','o/*/RECO01_CD_1[0-1]*' REBALANCE POWER 128;
    SQL> ALTER DISKGROUP DBFS_DG ADD DISK 'o/*/DBFS_DG_CD_0[6-9]*','o/*/DBFS_DG_CD_1[0-1]*' REBALANCE POWER 128;
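    Each ADD DISK kicks off a rebalance. A minimal way to watch its progress from either ASM instance (the exact column list here is just an illustration):

    SQL> select inst_id, operation, pass, state, power, sofar, est_work, est_minutes from gv$asm_operation;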
   Monitor the rebalance using gv$asm_operation and, once it completes, change the appliance mode back to TRUE:

    SQL> ALTER DISKGROUP DATA01 set attribute 'appliance.mode'='TRUE';
    SQL> ALTER DISKGROUP RECO01 set attribute 'appliance.mode'='TRUE';
    SQL> ALTER DISKGROUP DBFS_DG set attribute 'appliance.mode'='TRUE';
   And at this point you are done with the upgrade. I strongly recommend running the latest exachk report and making sure there are no issues with the configuration.
    A problem you might encounter is that the flash is not fully utilized – in my case I had 128 MB free on each flash card.
   
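    One way to spot the problem yourself is to look at the free space left on the flash cell disks – an illustrative check run on a cell:

    cellcli -e "list celldisk attributes name, size, freeSpace where diskType='FlashDisk'"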
   This seems to be a known bug and to fix it you need to recreate both the flashcache and the flashlog, along the lines of the sketch below.
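    A minimal sketch of that fix, assuming it is run on each cell (or pushed out with dcli) and that any write-back flash cache has been flushed first:

    # flush first if write-back flash cache is in use, and wait for the flush to finish
    cellcli -e alter flashcache all flush
    cellcli -e drop flashcache
    cellcli -e drop flashlog
    # recreate the 512 MB flash log before the flash cache so it gets its space first
    cellcli -e create flashlog all
    cellcli -e create flashcache all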
    References:

  • Extending an Eighth Rack to a Quarter Rack in Oracle Exadata Database Machine X4-2 and Later
  • Oracle Exadata Database Machine exachk or HealthCheck (Doc ID 1070954.1)
  • Exachk fails due to incorrect flashcache size after upgrading from 1/8 to a 1/4 rack (Doc ID 2048491.1)