@Sage wrote:
Hi all,
We recently tried to reinstall VIRL with the new 1.0 version. At first, UWM was working, but I had issues running a simulation. At some point I rebooted the server, and now UWM and STD are both reporting as down.
Here is the output of "sudo virl_health_status". Any ideas?
Disk usage:
Filesystem Size Used Avail Use% Mounted on
udev 126G 4.0K 126G 1% /dev
tmpfs 26G 1.7M 26G 1% /run
/dev/sda3 243G 30G 201G 13% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 126G 72K 126G 1% /run/shm
none 100M 4.0K 100M 1% /run/user
/dev/sda1 945M 81M 800M 10% /boot

CPU info:
32 Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz cores
Load: 0.6%, 0.6%, 0.7% for the past 1, 5, and 15 minutes
Overcommitted to 96 cores (multiplier 3.0)

RAM info:
Total RAM capacity available on host: 251GB
Free RAM available on host: 246GB
Total overcommitted RAM capacity available on host: 503GB (multiplier 2)
RAM capacity required by currently running nodes: 0GB

NTP servers:
pool.ntp.org iburst
us.pool.ntp.org iburst
remote refid st t when poll reach delay offset jitter
*129.6.15.29 .ACTS. 1 u 919 1024 375 96.878 21.282 7.860
+96.44.142.5 164.244.221.197 2 u 676 1024 377 12.950 -13.540 5.199
-173.255.215.209 127.67.113.92 2 u 642 1024 377 54.978 -20.415 8.214
+208.75.89.4 198.60.22.240 2 u 863 1024 377 43.144 -12.811 5.556

Interface addresses:
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
2: eth0 inet 130.39.4.240/24 brd 130.39.4.255 scope global eth0\ valid_lft forever preferred_lft forever
4: dummy1 inet 172.16.1.254/24 brd 172.16.1.255 scope global dummy1\ valid_lft forever preferred_lft forever
5: dummy2 inet 172.16.2.254/24 brd 172.16.2.255 scope global dummy2\ valid_lft forever preferred_lft forever
7: dummy4 inet 172.16.10.250/24 brd 172.16.10.255 scope global dummy4\ valid_lft forever preferred_lft forever
40: virbr0 inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0\ valid_lft forever preferred_lft forever
51: brq8a5be1cd-51 inet 172.16.3.254/24 brd 172.16.3.255 scope global brq8a5be1cd-51\ valid_lft forever preferred_lft forever

MySQL is available
Salt ID: F1D44457.virledu.info
Salt Master: ['us-1.virl.info', 'us-2.virl.info', 'us-3.virl.info', 'us-4.virl.info']
Salt Ping: Success

RabbitMQ status:
DEBUG 2015-12-15 07:26:45,321 amqp Start from server, version: 0.9, properties: {u'cluster_name': u'rabbit@virl', u'version': u'3.4.3', u'copyright': u'Copyright (C) 2007-2014 GoPivotal, Inc.', u'platform': u]
DEBUG 2015-12-15 07:26:45,323 amqp Open OK!
DEBUG 2015-12-15 07:26:45,340 amqp Start from server, version: 0.9, properties: {u'cluster_name': u'rabbit@virl', u'version': u'3.4.3', u'copyright': u'Copyright (C) 2007-2014 GoPivotal, Inc.', u'platform': u]
DEBUG 2015-12-15 07:26:45,342 amqp Open OK!
[{pid,5579},
{running_applications,[{rabbit,"RabbitMQ","3.4.3"},
{os_mon,"CPO CXC 138 46","2.2.14"},
{mnesia,"MNESIA CXC 138 12","4.11"},
{xmerl,"XML parser","1.3.5"},
{sasl,"SASL CXC 138 11","2.3.4"},
{stdlib,"ERTS CXC 138 10","1.19.4"},
{kernel,"ERTS CXC 138 10","2.16.4"}]},
{os,{unix,linux}},
{erlang_version,"Erlang R16B03 (erts-5.10.4) [source] [64-bit] [smp:32:32] [async-threads:30] [kernel-poll:true]\n"},
{memory,[{total,130432312},
{connection_readers,454416},
{connection_writers,77592},
{connection_channels,267144},
{connection_other,628712},
{queue_procs,830376},
{queue_slave_procs,0},
{plugins,0},
{other_proc,15196088},
{mnesia,173296},
{mgmt_db,0},
{msg_index,75872},
{other_ets,822352},
{binary,86732104},
{code,16403685},
{atom,561761},
{other_system,8208914}]},
{alarms,[]},
{listeners,[{clustering,25672,"::"},{amqp,5672,"0.0.0.0"}]},
{vm_memory_high_watermark,0.4},
{vm_memory_limit,108115499417},
{disk_free_limit,50000000},
{disk_free,215461822464},
{file_descriptors,[{total_limit,924},
{total_used,36},
{sockets_limit,829},
{sockets_used,34}]},
{processes,[{limit,1048576},{used,544}]},
{run_queue,0},
{uptime,63579}]

RabbitMQ configured for Nova is available
RabbitMQ configured for Neutron and Glance is available
OpenStack network service for STD is available
OpenStack compute service for STD is available
OpenStack identity service for STD is available
OpenStack image service for STD is available

OpenStack compute services:
[
{
"state": "up",
"binary": "nova-cert",
"host": "virl",
"id": 6,
"status": "enabled",
"disabled_reason": null,
"updated_at": "2015-12-15T13:26:46.000000",
"zone": "internal"
},
{
"state": "up",
"binary": "nova-consoleauth",
"host": "virl",
"id": 7,
"status": "enabled",
"disabled_reason": null,
"updated_at": "2015-12-15T13:26:46.000000",
"zone": "internal"
},
{
"state": "up",
"binary": "nova-scheduler",
"host": "virl",
"id": 8,
"status": "enabled",
"disabled_reason": null,
"updated_at": "2015-12-15T13:26:46.000000",
"zone": "internal"
},
{
"state": "up",
"binary": "nova-conductor",
"host": "virl",
"id": 9,
"status": "enabled",
"disabled_reason": null,
"updated_at": "2015-12-15T13:26:46.000000",
"zone": "internal"
},
{
"state": "up",
"binary": "nova-compute",
"host": "virl",
"id": 10,
"status": "enabled",
"disabled_reason": null,
"updated_at": "2015-12-15T13:26:49.000000",
"zone": "nova"
}
]

OpenStack network agents:
[
{
"binary": "neutron-metadata-agent",
"heartbeat_timestamp": "2015-12-15 13:26:39",
"configurations": {
"nova_metadata_port": 8775,
"metadata_proxy_socket": "/var/lib/neutron/metadata_proxy",
"nova_metadata_ip": "127.0.0.1"
},
"description": null,
"topic": "N/A",
"id": "27ad59aa-8ed2-4163-84dd-1c5dd961e042",
"admin_state_up": true,
"created_at": "2015-11-24 01:36:15",
"agent_type": "Metadata agent",
"alive": true,
"started_at": "2015-12-14 19:48:09",
"host": "virl"
},
{
"binary": "neutron-l3-agent",
"heartbeat_timestamp": "2015-12-15 13:26:39",
"configurations": {
"external_network_bridge": "",
"handle_internal_only_routers": true,
"router_id": "",
"agent_mode": "legacy",
"ex_gw_ports": 1,
"routers": 1,
"interfaces": 2,
"floating_ips": 0,
"use_namespaces": true,
"gateway_external_network_id": "",
"interface_driver": "neutron.agent.linux.interface.BridgeInterfaceDriver"
},
"description": null,
"topic": "l3_agent",
"id": "6facb4ca-9c4c-432f-b909-b3d0bb969ef6",
"admin_state_up": true,
"created_at": "2015-11-24 01:36:15",
"agent_type": "L3 agent",
"alive": true,
"started_at": "2015-12-14 19:48:09",
"host": "virl"
},
{
"binary": "neutron-dhcp-agent",
"heartbeat_timestamp": "2015-12-15 13:26:39",
"configurations": {
"subnets": 2,
"dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq",
"networks": 2,
"ports": 4,
"dhcp_lease_duration": 86400,
"use_namespaces": true
},
"description": null,
"topic": "dhcp_agent",
"id": "8f16a7e6-0e86-424f-b351-1db412a5efba",
"admin_state_up": true,
"created_at": "2015-11-24 01:36:15",
"agent_type": "DHCP agent",
"alive": true,
"started_at": "2015-12-14 19:48:08",
"host": "virl"
},
{
"binary": "neutron-linuxbridge-agent",
"heartbeat_timestamp": "2015-12-15 13:26:39",
"configurations": {
"l2_population": false,
"devices": 5,
"interface_mappings": {
"flat": "dummy1",
"ext-net": "dummy3",
"flat1": "dummy2"
},
"tunneling_ip": "172.16.10.250",
"tunnel_types": [
"vxlan"
]
},
"description": null,
"topic": "N/A",
"id": "e66ae23d-a488-4cb4-af33-acaa4ea25cfa",
"admin_state_up": true,
"created_at": "2015-11-24 01:36:15",
"agent_type": "Linux bridge agent",
"alive": true,
"started_at": "2015-12-14 18:34:47",
"host": "virl"
}
]

OpenVPN services:
OpenVPN server:
Disabled
Stopped

Status:
No status information present

AutoNetkit services:
virl-vis-mux running
ank_cisco_webserver running
ank-cisco-webserver listening on port 19401
virl_live_vis_webserver running
virl-vis-webserver listening on port 19402
virl_live_vis_processor running

VIRL environment priority (lowest->highest): global conf, local conf, SHELL env, CLI args
Global config can be defined at "/etc/virl/virl.cfg"
Local config can be defined at "/root/virl.cfg"
To set as SHELL ENV var: export NAME=value
To unset as SHELL ENV var: unset NAME
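(For anyone following along: the precedence the health check describes means a SHELL env var overrides both config files. A minimal sketch of overriding and restoring one of the settings listed below, using the VIRL_STD_PORT value from this same output, would be:)

```shell
# SHELL env layer overrides /etc/virl/virl.cfg (global) and /root/virl.cfg (local)
export VIRL_STD_PORT=19399   # value taken from the "Used values" section below
echo "$VIRL_STD_PORT"

# Remove the override; VIRL falls back to the config-file value
unset VIRL_STD_PORT
```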
=========================================================
Your global config:
VIRL_STD_USER_NAME = uwmadmin
VIRL_STD_DIR = /var/local/virl
VIRL_STD_HOST = 0.0.0.0
VIRL_DEBUG = False
VIRL_STD_PORT = 19399
VIRL_STD_PROCESS_COUNT = 20
=========================================================
Your local config:
=========================================================
Your SHELL environment:
=========================================================
Used values:
VIRL_STD_HOST = 0.0.0.0
VIRL_DEBUG = False
VIRL_STD_USER_NAME = uwmadmin
VIRL_STD_PORT = 19399
VIRL_STD_PROCESS_COUNT = 20
VIRL_STD_DIR = /var/local/virl
=========================================================
STD/UWM is initialized with the following users: uwmadmin,Sam
STD server on url http://localhost:19399 seems to be broken (returned status code 503)
UWM server on url http://localhost:19400 is down
Webmux server on url http://localhost:19403 is listening

STD server version:
STD server licensing:
STD server autonetkit status: