
VM Maestro - there are no active hosts


@bgp4fun wrote:

I'm having problems with a fresh OVA/ESXi 6.0 install. Having completed the install, everything looks okay, but when I use VM Maestro (directly on the VM or remotely) I get a "problem occurred" dialog saying 'There are no active hosts'.

Here's the virl_health_status output:

Disk usage:
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/virl--vg-root   71G   27G   44G  39% /
none                       4.0K     0  4.0K   0% /sys/fs/cgroup
udev                       7.8G  4.0K  7.8G   1% /dev
tmpfs                      1.6G  3.6M  1.6G   1% /run
none                       5.0M     0  5.0M   0% /run/lock
none                       7.8G  152K  7.8G   1% /run/shm
none                       100M   20K  100M   1% /run/user
/dev/sda1                  236M   87M  137M  39% /boot

CPU info:
4 Intel(R) Core(TM) i5-5250U CPU @ 1.60GHz cores

RAM info:
Total RAM capacity available on host: 15GB

NTP servers:
pool.ntp.org iburst
us.pool.ntp.org iburst

remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+204.235.61.9    128.10.19.24     2 u   53   64  377   35.607  -13.402   5.994
+104.131.51.97   199.102.46.70    2 u   56   64  377   49.262  -13.658   3.784
*66.228.59.187   200.98.196.212   2 u   54   64  377   52.198   -8.659   3.911
-149.20.68.17    66.220.9.122     2 u   28   64  377   72.414   -3.997   5.761
+178.18.16.124   132.163.4.102    2 u   46   64  377   77.341  -12.057   4.116

Interface addresses:
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: eth0    inet 192.168.12.7/24 brd 192.168.12.255 scope global eth0\       valid_lft forever preferred_lft forever
3: eth1    inet 172.16.1.254/24 brd 172.16.1.255 scope global eth1\       valid_lft forever preferred_lft forever
4: eth2    inet 172.16.2.254/24 brd 172.16.2.255 scope global eth2\       valid_lft forever preferred_lft forever
5: eth3    inet 172.16.3.254/24 brd 172.16.3.255 scope global eth3\       valid_lft forever preferred_lft forever
6: eth4    inet 172.16.10.250/24 brd 172.16.10.255 scope global eth4\       valid_lft forever preferred_lft forever
7: lxcbr0    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0\       valid_lft forever preferred_lft forever
8: virbr0    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0\       valid_lft forever preferred_lft forever

MySQL is available

Local Salt ID and Domain:
local: F69A48CD.virl.info

RabbitMQ status:
[{pid,3118},
 {running_applications,[{rabbit,"RabbitMQ","3.2.4"},
                        {os_mon,"CPO  CXC 138 46","2.2.14"},
                        {mnesia,"MNESIA  CXC 138 12","4.11"},
                        {xmerl,"XML parser","1.3.5"},
                        {sasl,"SASL  CXC 138 11","2.3.4"},
                        {stdlib,"ERTS  CXC 138 10","1.19.4"},
                        {kernel,"ERTS  CXC 138 10","2.16.4"}]},
 {os,{unix,linux}},
 {erlang_version,"Erlang R16B03 (erts-5.10.4) [source] [64-bit] [smp:4:4] [async-threads:30] [kernel-poll:true]\n"},
 {memory,[{total,95145248},
          {connection_procs,838800},
          {queue_procs,462152},
          {plugins,0},
          {other_proc,13570824},
          {mnesia,136512},
          {mgmt_db,0},
          {msg_index,47056},
          {other_ets,807528},
          {binary,57803736},
          {code,16514185},
          {atom,594537},
          {other_system,4369918}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,6675021824},
 {disk_free_limit,50000000},
 {disk_free,46279905280},
 {file_descriptors,[{total_limit,924},
                    {total_used,25},
                    {sockets_limit,829},
                    {sockets_used,23}]},
 {processes,[{limit,1048576},{used,357}]},
 {run_queue,0},
 {uptime,1597}]
RabbitMQ configured for Nova is available
RabbitMQ configured for Neutron and Glance is available

OpenStack identity service for STD is available
OpenStack image service for STD is available
OpenStack compute service for STD is available
OpenStack network service for STD is available

OpenStack compute services:
[
  {
    "disabled_reason": null,
    "updated_at": "2015-06-27T15:50:16.000000",
    "status": "enabled",
    "host": "virl",
    "id": 6,
    "zone": "internal",
    "state": "down",
    "binary": "nova-cert"
  },
  {
    "disabled_reason": null,
    "updated_at": "2015-06-27T15:50:09.000000",
    "status": "enabled",
    "host": "virl",
    "id": 7,
    "zone": "internal",
    "state": "down",
    "binary": "nova-consoleauth"
  },
  {
    "disabled_reason": null,
    "updated_at": "2015-06-27T15:50:14.000000",
    "status": "enabled",
    "host": "virl",
    "id": 8,
    "zone": "internal",
    "state": "down",
    "binary": "nova-scheduler"
  },
  {
    "disabled_reason": null,
    "updated_at": "2015-06-27T15:50:15.000000",
    "status": "enabled",
    "host": "virl",
    "id": 9,
    "zone": "internal",
    "state": "down",
    "binary": "nova-conductor"
  },
  {
    "disabled_reason": null,
    "updated_at": "2015-06-27T15:50:16.000000",
    "status": "enabled",
    "host": "virl",
    "id": 10,
    "zone": "nova",
    "state": "down",
    "binary": "nova-compute"
  }
]
WARNING:
Service "cert" is down.
Service "consoleauth" is down.
Service "scheduler" is down.
Service "conductor" is down.
Service "compute" is down.


OpenStack network agents:
[
  {
    "host": "virl",
    "topic": "N/A",
    "admin_state_up": true,
    "started_at": "2015-06-27 15:48:44",
    "description": null,
    "alive": false,
    "id": "0653c849-2b76-40c0-bd28-937a9f592922",
    "agent_type": "Linux bridge agent",
    "binary": "neutron-linuxbridge-agent",
    "created_at": "2015-04-27 16:05:55",
    "heartbeat_timestamp": "2015-06-27 15:50:10",
    "configurations": {
      "tunnel_types": [
        "vxlan"
      ],
      "tunneling_ip": "172.16.10.250",
      "interface_mappings": {
        "flat": "eth1",
        "ext-net": "eth3",
        "flat1": "eth2"
      },
      "devices": 8,
      "l2_population": false
    }
  },
  {
    "host": "virl",
    "topic": "l3_agent",
    "admin_state_up": true,
    "started_at": "2015-06-27 15:48:44",
    "description": null,
    "alive": false,
    "id": "19afc5be-20a8-44d7-9361-147486615ae6",
    "agent_type": "L3 agent",
    "binary": "neutron-l3-agent",
    "created_at": "2015-04-27 16:06:26",
    "heartbeat_timestamp": "2015-06-27 15:50:09",
    "configurations": {
      "interfaces": 2,
      "interface_driver": "neutron.agent.linux.interface.BridgeInterfaceDriver",
      "routers": 1,
      "use_namespaces": true,
      "handle_internal_only_routers": true,
      "router_id": "",
      "gateway_external_network_id": "",
      "ex_gw_ports": 1,
      "floating_ips": 0
    }
  },
  {
    "host": "virl",
    "topic": "dhcp_agent",
    "admin_state_up": true,
    "started_at": "2015-06-27 15:44:41",
    "description": null,
    "alive": false,
    "id": "6446f918-cd1f-4cde-83b9-8713547188a7",
    "agent_type": "DHCP agent",
    "binary": "neutron-dhcp-agent",
    "created_at": "2015-04-27 16:06:26",
    "heartbeat_timestamp": "2015-06-27 15:50:08",
    "configurations": {
      "dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq",
      "subnets": 5,
      "networks": 5,
      "dhcp_lease_duration": 86400,
      "use_namespaces": true,
      "ports": 1
    }
  },
  {
    "host": "virl",
    "topic": "N/A",
    "admin_state_up": true,
    "started_at": "2015-06-27 15:44:41",
    "description": null,
    "alive": false,
    "id": "db7222f6-f207-47f5-abf9-8c57161af83e",
    "agent_type": "Metadata agent",
    "binary": "neutron-metadata-agent",
    "created_at": "2015-04-27 16:06:25",
    "heartbeat_timestamp": "2015-06-27 15:50:08",
    "configurations": {
      "metadata_proxy_socket": "/var/lib/neutron/metadata_proxy",
      "nova_metadata_port": 8775,
      "nova_metadata_ip": "192.168.12.7"
    }
  }
]
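
The Neutron agents are in the same state: all four report "alive": false. A short sketch for bouncing them, using the names from the "binary" fields above; the actual upstart job for the linuxbridge agent may be called neutron-plugin-linuxbridge-agent on this release, so these service names are an assumption worth verifying first with "initctl list | grep neutron":

# restart each agent, then re-run virl_health_status and look for "alive": true
for agent in neutron-linuxbridge-agent neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent; do
    sudo service "$agent" restart
done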

STD server configuration:
VIRL environment priority (lowest->highest): global conf, local conf, SHELL env, CLI args
Global config can be defined at "/etc/virl/virl.cfg"
Local config can be defined at "/home/virl/virl.cfg"
To set as SHELL ENV var: export NAME=value
To unset as SHELL ENV var: unset NAME
=========================================================
Your global config:
VIRL_DEBUG = False
VIRL_STD_HOST = 0.0.0.0
VIRL_STD_DIR = /var/local/virl
VIRL_STD_PORT = 19399
VIRL_STD_USER_NAME = uwmadmin
VIRL_STD_PROCESS_COUNT = 20
=========================================================
Your local config:
=========================================================
Your SHELL environment:
=========================================================
Used values:
VIRL_STD_PORT = 19399
VIRL_STD_USER_NAME = uwmadmin
VIRL_DEBUG = False
VIRL_STD_HOST = 0.0.0.0
VIRL_STD_PROCESS_COUNT = 20
VIRL_STD_DIR = /var/local/virl
=========================================================
STD/UWM is initialized with the following users: uwmadmin, guest
STD server on url http://localhost:19399 is listening, server version 0.10.14.20
UWM server on url http://localhost:19400 is listening, server version 0.10.14.20
OpenStack cluster nodes info:
{
  "(all_hosts)": {
    "(total)": {
      "vcpus": 4,
      "ram": 15914,
      "disk": 70
    },
    "(used_max)": {
      "vcpus": 0,
      "ram": 0,
      "disk": 0
    },
    "(used_now)": {
      "vcpus": 0,
      "ram": 512,
      "disk": 0
    }
  },
  "virl": {
    "(total)": {
      "vcpus": 4,
      "ram": 15914,
      "disk": 70
    },
    "(used_max)": {
      "vcpus": 0,
      "ram": 0,
      "disk": 0
    },
    "(used_now)": {
      "vcpus": 0,
      "ram": 512,
      "disk": 0
    }
  }
}

STD server version:
{
  "kvm-ok": "INFO: /dev/kvm exists\nKVM acceleration can be used",
  "warning-fields": [],
  "features": [
    "no jobs",
    "subtypes",
    "licensing",
    "systemlogs",
    "list",
    "status:nodes",
    "status:expires",
    "export",
    "export:updated-addresses",
    "export:updated-startup-configs",
    "export:running-configs",
    "export:startup-configs",
    "update:start",
    "update:stop",
    "update:link-state",
    "launch:expires",
    "launch:partial",
    "capture:offline",
    "capture:live",
    "admin:list",
    "admin:stop",
    "admin:update:expires",
    "vnc-console",
    "serial-port"
  ],
  "version": "1.2",
  "virl-version": "0.10.14.20",
  "endpoint-present": false,
  "host-cpu-model": "Intel(R) Core(TM) i5-5250U CPU @ 1.60GHz",
  "client-compatible": true,
  "uwm-url": "http://192.168.12.7:19400",
  "distro-version": "Ubuntu 14.04.2 LTS",
  "host-cpu-count": 4,
  "openstack-version": "2014.1.4",
  "host-ram": 15,
  "kvm-version": "QEMU emulator version 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.13), Copyright (c) 2003-2008 Fabrice Bellard",
  "kernel-version": "3.16.0-41-generic x86_64"
}

STD server licensing:
{
  "uwm-url": "http://192.168.12.7:19400/admin/salt/",
  "product-capacity": 15,
  "product-usage": 0,
  "product-expires": 7,
  "product-license": [
    "us-virl-salt.cisco.com"
  ],
  "hostid": "F69A48CD.virl.info",
  "features": [
    "Cariden.MATE.import",
    "Cariden.MATE.export"
  ]
}

STD server autonetkit status:
{
  "warning-fields": [],
  "autonetkit-cisco-version": "VIRL Configuration Engine 0.15.8",
  "version": "1.0",
  "autonetkit-version": "autonetkit 0.15.3",
  "virl-version": "0.10.14.20"
}

