
[Sysadmin] Routing to null



  • SniperGuy SniperGuyGaming Registered User regular
    Hello sysadmin thread. I'm back at my job at a school and oh boy are things on fire.

    Every kid has an iPad, and our access points are Ruckus gear that has reached end of life. We're mandating more digital learning tools this year in case we end up going remote again, and quite a few people have complained to me that they can't get their iPad to work consistently on the network. Wired stuff seems to work just fine, but wireless is struggling.

    My assumption is that the APs are maxed out because too many people are using them at once, and we need to upgrade to APs that aren't end of life. But since that costs a bunch of money, admin wants me to confirm it, so I'm poking around trying to do that. Technically the spec sheets say the APs can handle more connections than they currently have on them, but in practice it doesn't seem to be working too well.

    Anything I can do to more easily confirm this is the actual issue?

  • SiliconStew Registered User regular
    As I recall, Ruckus APs have a Health page that lists things like latency, airtime utilization, and capacity, plus the reasons for client connection failures -- association (can't connect to the AP), authentication (account can't log into the network), DHCP (client can't get an IP address), etc. -- which can help diagnose issues.

    I assume if you're using EOL equipment you also don't pay for a support contract with Ruckus to have them assist with diagnosis?

  • twmjr Registered User regular
    SniperGuy wrote: »
    Hello sysadmin thread. I'm back at my job at a school and oh boy are things on fire.

    Every kid has an iPad, and our access points are Ruckus gear that has reached end of life. We're mandating more digital learning tools this year in case we end up going remote again, and quite a few people have complained to me that they can't get their iPad to work consistently on the network. Wired stuff seems to work just fine, but wireless is struggling.

    My assumption is that the APs are maxed out because too many people are using them at once, and we need to upgrade to APs that aren't end of life. But since that costs a bunch of money, admin wants me to confirm it, so I'm poking around trying to do that. Technically the spec sheets say the APs can handle more connections than they currently have on them, but in practice it doesn't seem to be working too well.

    Anything I can do to more easily confirm this is the actual issue?

    There are a number of potential things going on; I'll try to keep it simple. Most importantly, AP spec sheets always claim they can handle way more devices than they really can. Yes, theoretically a thousand devices or whatever can connect to an AP, but it's going to work like trash once anyone starts doing anything. For a high-density deployment, you're ideally trying to keep the number of devices per AP to around 20 (give or take, depending on various things).

    You'll want to grab a wifi spectrum analyzer -- there are free options for every major platform. Take it to the impacted areas and see what the spectrum looks like. You're looking in particular for things like channel utilization and interference, the latter especially on the 2.4 GHz band. You'll also want to make sure the RSSI you're seeing where the devices actually sit is good enough for a stable connection. We generally shoot for -67 dBm or better to support voice/video applications.

    If you're going to replace the APs, you should also make sure the design is actually going to support your use case. The spectrum analyzer will help you understand the current design, at least; it may also be worth doing some reading on "high density AP design". It's probably out of budget to get a real survey done, but if you understand the parameters that would go into one, you should be able to propose a design that at least comes closer to meeting your needs than just throwing APs up in the ceiling.
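    If you want to spot-check signal strength without buying anything, here's a rough sketch of the idea -- it assumes a Linux laptop with the `iw` tool available, and the interface name and SSID are placeholders:

    # Rough site-survey helper (sketch). Assumes Linux with `iw` installed and
    # run as root; INTERFACE and TARGET_SSID are placeholders.
    import re
    import subprocess

    INTERFACE = "wlan0"          # your wireless interface
    TARGET_SSID = "SchoolWiFi"   # the SSID you care about

    def scan(interface):
        # `iw dev <if> scan` prints one "BSS <mac>" block per visible AP,
        # including "signal: -xx.xx dBm" and "SSID: <name>" lines.
        out = subprocess.run(["iw", "dev", interface, "scan"],
                             capture_output=True, text=True, check=True).stdout
        results = []
        for block in out.split("BSS ")[1:]:
            bssid = block.split("(")[0].strip()
            signal = re.search(r"signal:\s*(-?\d+(?:\.\d+)?) dBm", block)
            ssid = re.search(r"SSID:\s*(.*)", block)
            if signal and ssid:
                results.append((ssid.group(1).strip(), bssid, float(signal.group(1))))
        return results

    if __name__ == "__main__":
        for ssid, bssid, rssi in sorted(scan(INTERFACE), key=lambda r: r[2], reverse=True):
            if ssid == TARGET_SSID:
                flag = "OK" if rssi >= -67 else "WEAK"   # the -67 dBm rule of thumb above
                print(f"{bssid}  {rssi:6.1f} dBm  {flag}")

    Walk that around the problem classrooms and note where your own SSID drops below the threshold; the free analyzer apps will give you the same numbers with channel utilization on top.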

  • lwt1973 King of Thieves Syndication Registered User regular
    So sick and tired of our software provider telling us it's our fault and not theirs. Users are getting errors on the site, and I've proven it's not just our environment, since users working from home have the same issue. I push and push after they say they don't see any errors on their side. So I finally get them to look at it from our end and, surprise surprise, when we try to do some things on their site they can see the errors.

  • V1m Registered User regular
    Nvm

  • V1m Registered User regular
    Wrong thread hurr durr

  • LD50 Registered User regular
    Why CenturyLink? Why?

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    LD50 wrote: »
    Why CenturyLink? Why?

    they're so bad

    "But my ISP sucks too"

    Unless you deal with CTL on a regular basis, you have no idea how amazingly bad they are

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    lwt1973 wrote: »
    So sick and tired of our software provider telling us it's our fault and not theirs. Users are getting errors on the site, and I've proven it's not just our environment, since users working from home have the same issue. I push and push after they say they don't see any errors on their side. So I finally get them to look at it from our end and, surprise surprise, when we try to do some things on their site they can see the errors.

    but that's what software vendors do

    that's intrinsic to being a software vendor

  • Thawmus +Jackface Registered User regular
    Feral wrote: »
    LD50 wrote: »
    Why CenturyLink? Why?

    they're so bad

    "But my ISP sucks too"

    Unless you deal with CTL on a regular basis, you have no idea how amazingly bad they are

    Had a conference call with a fiber provider, because I'm flirting with getting ethernet for all our branches, and they spent a good deal of time explaining to me that their customers get to talk to a low level NOC person when they call, but they're quite knowledgeable and often can help right away instead of reading scripts for an hour and getting you nowhere.

    "We have CenturyLink circuits and I used to resell their DSL, I understand what you're saying to me."

    They could not stop laughing.

    That's how much of a joke they are. I said their name to other technicians and sales folks and they thought it was funny.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    Thawmus wrote: »
    Feral wrote: »
    LD50 wrote: »
    Why CenturyLink? Why?

    they're so bad

    "But my ISP sucks too"

    Unless you deal with CTL on a regular basis, you have no idea how amazingly bad they are

    Had a conference call with a fiber provider, because I'm flirting with getting ethernet for all our branches, and they spent a good deal of time explaining to me that their customers get to talk to a low level NOC person when they call, but they're quite knowledgeable and often can help right away instead of reading scripts for an hour and getting you nowhere.

    "We have CenturyLink circuits and I used to resell their DSL, I understand what you're saying to me."

    They could not stop laughing.

    That's how much of a joke they are. I said their name to other technicians and sales folks and they thought it was funny.

    I have so many stories about how god-awful they are.

    We upgraded one of our Internet connections from 100Mb/s to 200Mb/s. This is fiber business Internet with a contract and an SLA, not some shitty residential connection. Because of various bureaucratic reasons with CTL, the upgrade necessitated getting a new Internet circuit and migrating the IPv4 block over.

    Since the IPv4 migration would involve downtime for public-facing services, it had to be done during a maintenance window. We chose Sunday 8am as the maintenance window.

    It took FOUR separate attempts, all during different Sunday maintenance windows, to successfully migrate the Internet connection.

    That's just for Internet services and IPv4 static IPs. We also pay a lot of money for always-on DDoS mitigation. They NEVER got DDoS mitigation successfully working on the new Internet circuit.

    Each call involved a DIFFERENT configuration for BGP & GRE tunnels. It was like they had no idea how to make DDoS mitigation work anymore; they were just throwing ideas against the wall to see where they'd splatter.

    On one call: "Try terminating the GRE tunnel on this IP. ... huh, I wonder why it's not working."
    On another call: "Wait, why did you try terminating the GRE tunnel on that IP? Of course that wouldn't work! Our SOC would never tell you to do that! You need to terminate the tunnel on this other IP! ... ... ... huh, that's not working either."

    Finally, WE said "y'know, forget GRE & BGP, just try to get the Internet circuit up and running with basic static routing and nothing fancy." Lo and behold, it worked! Great. At least we have fast Internet. We can work out DDoS mitigation later.

    CTL's approach to talking about DDoS mitigation with us was to schedule yet another call with salespeople to "teach us about the value of DDoS mitigation." Motherfuckers, we KNOW the value. Do you think we're fucking dense? The problem is that you can't get the fucking service working!

    And on the call it was clear that the salespeople had no idea what had happened. What they heard was "there's a customer who refused to turn on DDoS mitigation and now wants out of their contract." Nobody told them that the customer tried to turn on DDoS mitigation four times on a new Internet circuit and CTL fucked it up each time.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    Then of course there was this shit back in April/May/June:
    Feral wrote: »
    Thawmus wrote: »
    Feral wrote: »
    Now I have a bona fide "am I the asshole?" question.

    We have layer-2 metro ethernet through an ISP who shall go unnamed but rhymes with Schmenturykink.

    Old metro ethernet is limited to 50Mbps at each remote office, and we wanted to upgrade to 100Mbps. This required them to put in whole new circuits.

    After spinning up the new circuits, iPerfs and other transfer tests weren't getting us anywhere near 100Mbps performance. Note that the same tests, done exactly the same way, on the old circuits got within 95% of subscribed bandwidth (~48Mbps each.) The new circuits were all over the map. Some were giving us decent speeds upwards of 80Mbps, some were as bad as 3Mbps, most were hovering around 10-20Mbps.

    After a full month of going back and forth with the ISP, we finally got an email that was forwarded over from some anonymous NOC tech in some datacenter (strongly implied to be a greybeard) that said "Make sure they have traffic shaping turned on. If they're using Cisco, they should have policy-maps."

    I've never had to do this before. So I researched it, applied some policy-maps with "shape average 104857600" and, voila, it worked. (Note that we didn't have any traffic shaping applied to the old circuits, and they worked fine without it.)

    So I guess my question to you guys is... is this a normal thing? Do you usually have to set traffic shaping with policy-maps over WANs? Should I have just done that first?
    What's the link speed on the interface? I know you're paying for 100Mbps, but what are you negotiated at? 100-Full or Gig?

    The only thing I can think of is that they're not policing your point-to-point traffic and so you were saturating your connection, iperf sees drops and ratchets down, you get poor results. What were your results when you did online speed tests instead of point-to-point iperfs?

    I can't tell you if it's odd or not, I have no experience with metro ethernet or frame relays or MPLS and the like, sadly. I'd say that the problem should happen often enough that they should take less than a month to solve it, though.

    SiliconStew wrote: »
    My guess would be that they are in fact "policing" the connection, as in, you try to send more than your "allotted" bandwidth during the test and they just start unceremoniously dropping your traffic above the cap, leading to massive variance in real throughput due to resends and backoffs. But they are not implementing any actual traffic shaping on their own network, as shapers would queue traffic to give a consistent average. And high-capacity traffic shapers cost money and CL is loath to do such a thing.

    Personally I've never had to traffic shape a private link like that. But I've never had the need to terminate those connections on our own equipment. We just ask for an Ethernet handoff and they supply the gear. As such, all the traffic shaping has been done by the ISP so we get the bandwidth we contracted for. So the implementation has always been invisible from our perspective as far as any special configs go.

    Thawmus: negotiated at 1Gbps.

    I didn't do online speed tests. Just iperfs, FTPS, and TFTP to internal servers over the WAN.

    SiliconStew: we also just ask for an Ethernet handoff. That's the weird thing. We have a Cisco ISR router at each location; one port on each router is on the new circuit, one port is on the old circuit. At each location, the ISP has two Adva EtherJack FSPs, one for each circuit, and we just patch a CAT6 from the Adva to our Cisco.

    Okay, well, three responses so far from folks saying that this is unusual. Good.

    The ISP wants to charge us for the month we spent chasing our tails over this before they told us that traffic shaping is a requirement. I'm trying to push for a credit for that month because we weren't getting within 90% of the subscribed bandwidth. Their response is, predictably, "we aren't responsible for misconfigurations inside customer owned equipment."

    My argument back is "you are responsible for communicating system requirements, especially unusual ones."
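    For reference, the shaping fix in question boils down to a few lines of MQC config. A rough sketch of pushing it with Netmiko (assuming a Cisco IOS router; the hostname, credentials, and interface name are placeholders):

    # Sketch: shape outbound traffic to the subscribed 100 Mbps so the carrier's
    # policer stops dropping it. Assumes Netmiko is installed; all device
    # details below are placeholders.
    from netmiko import ConnectHandler

    SHAPING_CONFIG = [
        "policy-map SHAPE-100M",
        " class class-default",
        "  shape average 104857600",       # 100 Mbps, in bits per second
        "interface GigabitEthernet0/0/0",  # the WAN-facing interface (placeholder)
        " service-policy output SHAPE-100M",
    ]

    router = {
        "device_type": "cisco_ios",
        "host": "branch-rtr-01.example.net",  # placeholder
        "username": "netadmin",               # placeholder
        "password": "********",
    }

    with ConnectHandler(**router) as conn:
        print(conn.send_config_set(SHAPING_CONFIG))
        conn.save_config()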

  • lwt1973 King of Thieves Syndication Registered User regular
    Seems we are getting a reject message from a couple of domains.

    you are trying to use me as a relay but i have not been configured to let you do this. Please visit www.symanteccloud.com/troubleshooting for more details about this error message and instructions to resolve this issue.

    Anyone run into this? I haven't touched our server but it just popped up today. I checked mxtoolbox and got an all clear.

  • Darkewolfe Registered User regular
    Sounds like you're trying to use it as a relay and it's not configured to allow it.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    "Trying to use me as a relay" sounds like you're sending email to that server for a domain that the server isn't configured to accept. That suggests either a misconfiguration on the receiving end (Symantec) or a misconfiguration in the customer's DNS.

    Most likely explanation: the customer domain has its MX record set to symanteccloud, but they're not a paying symanteccloud customer.

    Alternative explanation: there's something wrong with your DNS and you're getting an out-of-date MX record.
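    If you want to rule out the DNS angle quickly, here's a rough sketch (assuming dnspython 2.x is installed; the domain is a placeholder) that compares what your resolver hands out versus public DNS:

    # Quick MX sanity check (sketch). Requires dnspython >= 2.0; the domain
    # below is a placeholder for the rejecting recipient domain.
    import dns.resolver

    def mx_records(domain, nameserver=None):
        resolver = dns.resolver.Resolver()
        if nameserver:
            resolver.nameservers = [nameserver]   # e.g. "8.8.8.8" to bypass internal DNS
        answers = resolver.resolve(domain, "MX")
        return sorted((r.preference, str(r.exchange)) for r in answers)

    if __name__ == "__main__":
        domain = "example.com"   # placeholder
        print("your resolver:", mx_records(domain))
        print("public DNS:   ", mx_records(domain, "8.8.8.8"))

    If the two disagree, or the records point at symanteccloud when they shouldn't, that's your answer.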

  • SniperGuy SniperGuyGaming Registered User regular
    So we did go ahead and get a wifi upgrade since our APs were all end of life anyway. When things work, they work significantly faster, which is great!

    Unfortunately, we still aren't getting the speeds we need, and a large number of devices are getting barely any throughput at all since there isn't enough bandwidth to go around. Weirdly if you run a speedtest.com or google's speed test in my office, it gets full bandwidth. Devices on wireless are getting a sixth of that, if they're even able to get to those sites.

    The external IT company that did the wireless install is coming back today for the third day in a row to finish up the few remaining APs and hopefully untangle the bandwidth issue, but boy, this week has not been fun.

    I would be in the room with a bunch of switches doing file transfer speed checks right now but that's also a breakroom and people are in there eating lunch without masks on at the moment so I'm complaining on the internet instead.

    After all the testing I'm feeling like a device or a cable has gone screwy somewhere. I don't have enough network know-how to track it down precisely, but hopefully I can narrow things down enough that the external dudes can pinpoint and fix it. I should really work on getting a CCNA or something, because a lot of networking stuff is just crazy nonsense to me.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    SniperGuy wrote: »
    Weirdly if you run a speedtest.com or google's speed test in my office, it gets full bandwidth.

    Does this mean

    "Weirdly if you run a speedtest.com or google's speed test while cabled to an Ethernet jack in my office, it gets full bandwidth."

    or

    "Weirdly if you run a speedtest.com or google's speed test while wirelessly associated to a WiFi AP in my office, it gets full bandwidth."

    ?

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    @SniperGuy

    When you feel out of your element, the most helpful things you can do are:

    - Isolate variables. Just try different combinations of machine/AP/network jack, etc. Does one laptop get better speed than another? Does one WiFi AP get better speed than another? Do you get good speed when there's only one client on the AP? If you unplug an AP and plug in a laptop to the same jack, does that laptop get good speed?

    - Take good notes. Error messages and exact dates/times are very important. Noting down which machines and users are affected helps too, as do steps to reproduce the problem. (A small script that logs each test run, like the sketch after this list, makes that painless.)

    - Report facts first. Once you've reported facts, speculation is okay. Let's say I'm a senior tech swooping in to bail out a junior tech. My advantage as the senior tech is that I have a deeper well of diagnostic skills and experiences than you. Your advantage as the junior tech is that you're immediately on the ground, seeing exactly what's happening. That means that reporting the events as they happened is the best way you can show your value to me. (Your diagnostic instincts have the potential to be valuable, but they're less valuable than accurate factual reporting.)
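    Here's the kind of thing I mean -- a minimal sketch, assuming iperf3 is installed on the test machine and an iperf3 server is already running on the wired LAN (the server hostname and the label are placeholders):

    # Log one throughput test per run (sketch). Assumes iperf3 on the client and
    # an iperf3 server reachable at IPERF_SERVER (placeholder). Run it from each
    # machine/AP/jack combination and keep the CSV with your notes.
    import csv
    import datetime
    import json
    import subprocess

    IPERF_SERVER = "iperf.school.internal"   # placeholder

    def run_test(label):
        out = subprocess.run(["iperf3", "-c", IPERF_SERVER, "-J"],
                             capture_output=True, text=True, check=True).stdout
        data = json.loads(out)
        mbps = data["end"]["sum_received"]["bits_per_second"] / 1e6
        return {"when": datetime.datetime.now().isoformat(timespec="seconds"),
                "label": label,
                "mbps": round(mbps, 1)}

    if __name__ == "__main__":
        row = run_test(label="macbook-on-3rd-floor-AP")   # describe machine + AP/jack
        with open("speed_tests.csv", "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=row.keys())
            if f.tell() == 0:
                writer.writeheader()
            writer.writerow(row)
        print(row)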

    In your free time (lol) I recommend reading Eric S. Raymond's How To Ask Questions The Smart Way. Eric S. Raymond is a bit of an edgelord and his rhetorical style can be pretty caustic (especially since it was first written in the early 00s when forced edginess was the Internet zeitgeist) but despite his style, he makes very good points.

  • SniperGuy SniperGuyGaming Registered User regular
    Feral wrote: »
    SniperGuy wrote: »
    Weirdly if you run a speedtest.com or google's speed test in my office, it gets full bandwidth.

    Does this mean

    "Weirdly if you run a speedtest.com or google's speed test while cabled to an Ethernet jack in my office, it gets full bandwidth."

    or

    "Weirdly if you run a speedtest.com or google's speed test while wirelessly associated to a WiFi AP in my office, it gets full bandwidth."

    ?

    Cabled to an ethernet jack. Getting ~300, which is what it should be on the wired desktop. The Mac laptop I have in here is getting ~90, which seems quite a bit slower. Then my Android phone is getting like ~35. Then there are student iPads unable to even get to websites. After all the testing today we're still pretty sure it's a bad device or cable somewhere.

    And yeah, I've been running all over the building testing out different stuff today and writing it all down for the external IT folk to use and hopefully they can fix this problem because boy I am tired of staying late and stuff not working.

    That seems like a very handy link though, especially for someone who teaches. Seems like some useful stuff to pass along to students, without quite so much of that caustic style.

  • Mugsley Delaware Registered User regular
    FWIW I have Gigabit at home and none of the wireless devices can reach above 300Mbps. It's the nature of wifi.

  • Shadowfire Vermont, in the middle of nowhere Registered User regular
    Mugsley wrote: »
    FWIW I have Gigabit at home and none of the wireless devices can reach above 300Mbps. It's the nature of wifi.

    Without both a WiFi 6-capable router and WiFi 6-capable devices, you never will.

    Even with them you're very much up to the whims of technology and whatever gremlins are currently residing in your premises.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    I have a VLAN in my main datacenter called Isolation, which is for exactly what it says on the tin. Isolating VMs from all other environments.

    But every time I use it, I think of Trent Reznor. "You can have my isolation!"

  • Feldorn Mediocre Registered User regular
    What sort of App Virtualization systems have you all worked with, and would recommend?

    We currently use Citrix, but I have a project asking me to scope out alternatives as well. I know that VMware has Horizon, and MS has its own App-V as well.

  • Darkewolfe Registered User regular
    What are you looking to accomplish? Remote worker support?

  • Feldorn Mediocre Registered User regular
    Essentially yes, but not quite. There are some internal web applications that we need to make available to external parties.

  • That_Guy I don't wanna be that guy Registered User regular
    The "I can't even be bothered" way would be to set them up with TeamViewer. The quick-and-dirty way would be to just open up an RDP port to a VM. A more secure way would be to set them up with a VPN back to your router and let them RDP in through that. I can't fucking stand Citrix so I'd say steer clear of that.

  • Feldorn Mediocre Registered User regular
    The VPN way would be sensible, since we wouldn't then need all the servers related to an App-V solution, and we are discussing that. It would also be nice to not need another NetScaler and all the RDS CALs that it would require.

    No idea if that will fly ultimately.

  • SiliconStew Registered User regular
    Feldorn wrote: »
    Essentially yes, but not quite. There are some internal web applications that we need to make available to external parties.

    If they're already web based, why would you even need app virtualization?

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    Feldorn wrote: »
    Essentially yes, but not quite. There are some internal web applications that we need to make available to external parties.

    If it's just self-hosted web applications, the most elegant way to do this is with a reverse proxy. (Sometimes this is also called a web application proxy; it can be one of the features of a web application firewall.)

    If you need to provide external access to Windows desktop ("fat") applications as well, then yeah you need some kind of app virtualization.

    The big three are the ones you mentioned: Citrix XenApp, VMware Horizon, and Microsoft App-V. Having supported all three, I don't have a preference. Each one presents its own set of challenges. All three can be infuriating if you go into them blindly. Ultimately, the licensing costs and server footprints are lowest for App-V (in my experience) and highest for VMware Horizon. But YMMV. It mostly comes down to the feature set you need.

    I have no experience with any of the smaller-name alternatives like Cameyo. They might be worth investigating.

    I'm not a fan of using VPNs alone for this sort of thing. (Though wrapping up your app virtualization or remote desktop connections in a VPN is fine.)

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    Pet peeve: when another tech starts their troubleshooting process with "what changed?"

    "Did something change on Server47? It's no longer accepting HTTPS requests."

    No, nothing changed. We froze it in amber a year ago.

  • Feldorn Mediocre Registered User regular
    Feldorn wrote: »
    Essentially yes, but not quite. There are some internal web applications that we need to make available to external parties.

    SiliconStew wrote: »
    If they're already web based, why would you even need app virtualization?

    It's mostly for the authentication at this point. Due to compliance everyone must have unique credentials.

    Reverse proxy is great if we can allow anonymous access, but this shit is insecure so we need additional layers in front of it. We don't have any installed applications anymore, but still need to provide secure remote access to insecure internal resources.

  • LD50 Registered User regular
    The vmware horizon solutions I have used are fucking gaaaaaaaaaarbage.

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    Feldorn wrote: »
    Feldorn wrote: »
    Essentially yes, but not quite. There are some internal web applications that we need to make available to external parties.

    SiliconStew wrote: »
    If they're already web based, why would you even need app virtualization?

    It's mostly for the authentication at this point. Due to compliance everyone must have unique credentials.

    Reverse proxy is great if we can allow anonymous access, but this shit is insecure so we need additional layers in front of it. We don't have any installed applications anymore, but still need to provide secure remote access to insecure internal resources.

    You can use a reverse proxy to add an authentication layer. That's a pretty common use case.
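    Just to illustrate the idea -- a toy sketch, not production code (in practice you'd do this on the NetScaler, nginx, or a real WAF); the internal URL and credentials are placeholders:

    # Toy authenticating reverse proxy (sketch). Assumes Flask and requests are
    # installed; INTERNAL_APP and USERS are placeholders for the real app and
    # your real auth source.
    from flask import Flask, Response, request
    import requests

    INTERNAL_APP = "http://intranet-app.local:8080"   # placeholder internal web app
    USERS = {"alice": "s3cret"}                       # placeholder credential store

    app = Flask(__name__)

    def authorized(auth):
        return auth is not None and USERS.get(auth.username) == auth.password

    @app.route("/", defaults={"path": ""}, methods=["GET", "POST"])
    @app.route("/<path:path>", methods=["GET", "POST"])
    def proxy(path):
        if not authorized(request.authorization):
            # Challenge the client for credentials before anything reaches the app
            return Response("Authentication required", 401,
                            {"WWW-Authenticate": 'Basic realm="internal apps"'})
        upstream = requests.request(
            method=request.method,
            url=f"{INTERNAL_APP}/{path}",
            params=request.args,
            data=request.get_data(),
            headers={k: v for k, v in request.headers if k.lower() != "host"},
            timeout=30,
        )
        drop = {"content-encoding", "transfer-encoding", "connection"}
        return Response(upstream.content, upstream.status_code,
                        [(k, v) for k, v in upstream.headers.items() if k.lower() not in drop])

    if __name__ == "__main__":
        app.run(port=8443)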

  • Feral MEMETICHARIZARD interior crocodile alligator ⇔ ǝɹʇɐǝɥʇ ǝᴉʌoɯ ʇǝloɹʌǝɥɔ ɐ ǝʌᴉɹp ᴉ Registered User regular
    LD50 wrote: »
    The vmware horizon solutions I have used are fucking gaaaaaaaaaarbage.

    I personally think that Horizon is fine if you go into it with both eyes open and invest in it adequately. The problem is that too many companies buy into bullshit hype about desktop virtualization or app virtualization and either expect them to serve use cases where they're a poor fit, or they expect them to save money on hardware (they don't) and underinvest in them.

    They never, ever, ever save money compared to a traditional thick workstation setup.

  • LD50 Registered User regular
    Feral wrote: »
    LD50 wrote: »
    The vmware horizon solutions I have used are fucking gaaaaaaaaaarbage.

    I personally think that Horizon is fine if you go into it with both eyes open and invest in it adequately. The problem is that too many companies buy into bullshit hype about desktop virtualization or app virtualization and either expect them to serve use cases where they're a poor fit, or they expect them to save money on hardware (they don't) and underinvest in them.

    They never, ever, ever save money compared to a traditional thick workstation setup.

    Oh, I think they work fine when they are working properly.

    We have an affiliate rural hospital that has their entire infrastructure on Horizon. We help them out with the IT side of things when they need it. The virtual desktops work fine when they aren't broken, but they have a tendency to crash without recovering, and then the users the desktops are provisioned for can't log back in until they call the help desk and have the session restarted. I don't know why Horizon can't figure out that the session has hung and terminate it automatically.

    Also, the management console is a web page written in Flash.

  • Feldorn Mediocre Registered User regular
    Feral wrote: »
    Feldorn wrote: »
    Feldorn wrote: »
    Essentially yes, but not quite. There are some internal web applications that we need to make available to external parties.

    SiliconStew wrote: »
    If they're already web based, why would you even need app virtualization?

    It's mostly for the authentication at this point. Due to compliance everyone must have unique credentials.

    Reverse proxy is great if we can allow anonymous access, but this shit is insecure so we need additional layers in front of it. We don't have any installed applications anymore, but still need to provide secure remote access to insecure internal resources.

    You can use a reverse proxy to add an authentication layer. That's a pretty common use case.

    I should research that a bit.

    We have a NetScaler that can handle authentication, so perhaps we could use that and reverse proxy from there... I'll do that Monday though, not during whiskey time.

  • Darkewolfe Registered User regular
    Feral wrote: »
    Pet peeve: when another tech starts their troubleshooting process with "what changed?"

    "Did something change on Server47? It's no longer accepting HTTPS requests."

    No, nothing changed. We froze it in amber a year ago.

    I work in a larger org, so I'm gonna challenge this somewhat. In our type of environment we can pull a list of, say, new services or patches that were applied in the last 24 hours, which IS a good place to start when things stop working.

  • schuss Registered User regular
    Feldorn wrote: »
    Feldorn wrote: »
    Essentially yes, but not quite. There are some internal web applications that we need to make available to external parties.

    SiliconStew wrote: »
    If they're already web based, why would you even need app virtualization?

    It's mostly for the authentication at this point. Due to compliance everyone must have unique credentials.

    Reverse proxy is great if we can allow anonymous access, but this shit is insecure so we need additional layers in front of it. We don't have any installed applications anymore, but still need to provide secure remote access to insecure internal resources.

    Is there a reason you aren't taking a more zero-trust approach and building an AD/Ping/whatever auth wrapper for the apps so you can expose them more safely? Having insecure apps out there with security-via-tunnel is just a breach waiting to happen.
    If the answer is "we don't have time," that's fine, but an auth wrapper may be more sustainable than standing up a new virtualization stack, especially since you may be able to pull some cost savings out of a light refactor.

  • Feldorn Mediocre Registered User regular
    I’m not sure we have the time or expertise for that.

    This also isn’t something we can refactor. The main offender here is on the docket to be replaced, but that is probably a couple years out.

  • electricitylikesme Registered User regular
    InfluxDB is like... shockingly bad. I now have no questions as to how Prometheus took the world by storm the way it did, because things you can do easily in Prometheus are somewhere between stupidly hard, inaccurate, and literally impossible in InfluxQL when you're querying it.

    Like, wow. My entire company's metrics gathering seems to turn on "massive limitations of Influx in querying data" rather than any other actual restriction.
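    For comparison, the kind of query that's trivial against Prometheus's HTTP API -- a sketch assuming a Prometheus server on localhost:9090 and a counter named http_requests_total, both placeholders:

    # Per-second request rate over 5 minutes, grouped by job (sketch).
    # Assumes the requests library and a reachable Prometheus server.
    import requests

    PROM_URL = "http://localhost:9090/api/v1/query"        # placeholder
    QUERY = "sum(rate(http_requests_total[5m])) by (job)"   # placeholder metric

    resp = requests.get(PROM_URL, params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    for series in resp.json()["data"]["result"]:
        _timestamp, value = series["value"]
        print(series["metric"].get("job", "unknown"), f"{float(value):.2f} req/s")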

This discussion has been closed.