Best Template for Binary Options - Let's Be Honest ...

Powerful Binary Options 5M Template + Free Download Up to 90% Winnings

submitted by BinaryOptionsForward to BinaryTrain [link] [comments]

The Perfect Indicators for Trading Binary Options in MetaTrader as Templates

submitted by janni325 to ethtrader [link] [comments]

My brother and I just released an alpha of our open source declarative programming language (implemented in Haskell!) for writing modern web apps (i.e. React/Node/Prisma) with no boilerplate. We are still learning Haskell and would love to get your feedback / advice!

Web page: https://wasp-lang.dev Docs: https://wasp-lang.dev/docs Github repo: https://github.com/wasp-lang/wasp
We have been playing with Haskell for years now, but always on the side, and this is the first bigger project we are doing in Haskell (we thought Haskell would be a good fit for a compiler), so we have encountered interesting challenges and are learning a lot as we solve them.
We are mostly sticking to "Boring Haskell", partly because we are still learning some of the more complex concepts, but also to enable less experienced Haskellers to contribute to the codebase.
Some of the interesting Haskell-related challenges we encountered so far:
Some bigger Haskell-related things on our roadmap:
We are looking for alpha testers, contributors, feedback, so let us know if you would like to participate!
submitted by Martinsos to haskell [link] [comments]

Over-Optimizing for Performance

Recently on the csharp subreddit, the post "C# 9.0 records: immutable classes" linked to a surprisingly controversial article discussing how C# 9.0's records are, underneath it all, immutable classes. The comments are full of back-&-forth over whether one should use records for ease or structs for performance. The pro-struct argument revolved around the belief that performance should always be a developer's #1 priority, and that anything less was the realm of the laggard.
Here is a real-world example that shows with stark clarity why that kind of thinking is wrong.
Consider the following scenario:

1

You're working on a game with dozens, maybe hundreds of people on the team; you don't know because when you were cross with facilities about them removing all the fluorescents, you got accused of being against the new energy saving initiative. Now you swim in a malevolent ocean of darkness that on some very late nights alone in the office, you swear is actively trying to consume you.
 

2

The team that preceded you inherited an engine that is older than OOP, when source repositories were stacks of 8-inch floppies, and it looked as if Jefferson Starship was going to take over the world. One year ago they bequeathed upon the company this nightmare of broken, undocumented GOTO spaghetti & anti-patterns. You're convinced this was their sadistic revenge for all getting fired post-acquisition.
 

3

Management denied your request to get headcount for an additional technical artist, but helpfully supplied you with an overly nervous intern. After several weeks working alongside them, you're beginning to suspect they're pursuing something other than a liberal arts degree.
 

4

Despite the many getting started guides you spent countless evenings writing, the endless brownbags nobody attended, and the daily dozen emails you forward to oppressively inquisitive artists comprised of a single passive-aggressive sentence suggesting they scroll down to the part that begins FW: FW: FW: FW: FW: FW: RE: WE BROKE TOOL NEED WORKAROUND ASAP ...
 
...yes, despite all of that, the engineering team still spent days tracking down why the game kept crashing with Error 107221: У вас ошибка ("You have an error") after re-re-re-re-re-throwing an ex_exception when it couldn't (and should never even try to) load a 16K-textured floor mat.
 

5

Despite your many attempts to politely excuse yourself, one blissfully unaware artist exhausts 48 minutes of your lunch break explaining how the Pitchfork review for the latest "dope slab" of this TikTok-Instagram-naphouse artist you never heard of was just sooooo unfair.
 
And then in their hurry to finish up & catch the 2:30 PM bus home, they forget to toggle Compress To CXIFF (Custom Extended Interchange File Format), set the Compression slider 5/6ths of the way between -3 & -2, look to their left, look to their right, click Export As .MA 0.9.3alpha7, and make absolutely, positively, 100% SURE not to be working in prod. And THAT is how the game explodicated.
 

6

You know better than anyone the intermediate file format the main game loop passes to Game.dll, memory mapping it as a reverse top-middle Endian binary structure.
 
You know for 381 of the parameter fields what their 2-7 character names probably mean.
 
YOU know which 147 fields always have to be included, but with a null value, and that the field ah_xlut must ALWAYS be set to 0 unless it's Thursday, in which case that blackbox from hell requires its internal string equivalent: TRUE.
 
YOU know that the two tech artists & one rapidly aging intern that report to you would totally overhaul tooling so artists would never "happen" again, but there just aren't enough winters, springs, summers, falls, July 4ths, Christmas breaks, Presidents Days, and wedding anniversaries in a year to properly do so.
 

7

If you could just find the time between morning standups, after lunch standups, watersprint post-mortems, Milbert's daily wasting of an hour at your desk trying to convince you engineering should just rebuild the engine from the ground up in JavaScript & React, & HR's mandatory EKG Monitor job satisfaction surveys, you might be able to get at least some desperately-needed tooling done.
 
And so somehow you do. A blurry evening or two here. A 3:00 AM there. Sometimes just a solitary lunch hour.
 
Your dog no longer recognizes you.
 
You miss your wife calling to say she's finally cleaning out the hall closet and if you want to keep this box of old cards & something in plastic that says Underground Sea Beta 9.8 Grade, you better call her back immediately.
 
And your Aunt Midge, who doesn't understand how SMS works, bombards you one evening:
your father is...
no longer with us...
they found him...
1 week ago...
in an abandoned Piggly Wiggly...
by an old culvert...
split up...
he was then...
laid down to rest...
sent to St. Peter's...
and your father...
he's in a better place now...
don't worry...
it's totally okay...
we decided we will all go...
up to the mountain
 
You call your sister in a panic and, after a tidal wave of confusion & soul-rending anxiety, learn it was just Hoboken Wireless sending the messages out of order. This causes you to rapidly cycle.
 

8

On your bipolar's upswing, you find yourself more productive than you've ever been. Your mind is aglow with whirling, transient nodes of thought careening through a cosmic vapor of invention. It's like your brain is on 200mg of pure grade Adderall.
 
Your fingers ablaze with records, clean inheritance, beautiful pattern matching, bountiful expression syntax, aircraft carriers of green text that generate the most outstanding CHM for an internal tool the world has ever seen. Readable. PERFECTLY SOLID.
 
After much effort, you gaze upon the completed GUI of your magnum opus with the kind of pride you imagine one would feel if they hadn't missed the birth of their son. Clean, customer-grade WPF; tooltips for every control; sanity checks left & right; support for plugins & light scripting. It's even integrated with source control!
 
THOSE GODDAMNED ARTISTS CAN'T FAIL. YOUR PIPELINE TOOL WON'T LET THEM.
 
All they have to do is drag content into the application window, select an options template or use the one your tool suggests after content analysis, change a few options, click Export, and wait 3-5 minutes for it to generate a Game.dll-compatible binary.
 
Your optimism shines through the commit summary, your test plan giddy & carefree. With great anticipation, you await code review.
 

9

A week goes by. Then two. Then three. Nothing. The repeated pinging of engineers, unanswered.
 
Two months in you've begun to lose hope. Three months, the pangs of defeat. Four months, you write a blog post about how fatalism isn't an emotion or outlook, but the TRANSCENDENCE of their sum. Two years pass by. You are become apathy, destroyer of wills.
 

10

December 23rd, 2022: the annual Winter Holidays 2-hour work event. The bar is open, the Kokanee & Schmidt's flowing (max: 2 drink tickets). The mood a year-high ambivalent; the social distancing: acceptable. They even have Pabst Blue Ribbon, a beer so good it won an award once.
 
Standing beside you are your direct reports, Dave "Macroman" Thorgletop and wide-eyed The Intern, the 3 of you forming a triumvirate of who gives a shit. Dave is droning on & on about a recent family trip to Myrtle Beach. You pick up something something "can you believe that's when my daughter Beth scooped up a dead jellyfish? Ain't that something? A dead jellyfish," and "they even had a Ron Jons!"
 
You barely hear him, lost as you are in thought: "I wish I had 2 days of vacation." You stare down ruefully at your tallboy.
 
From the corner of your eye you spot Milbert, index finger pointed upward, face a look of pure excitement.
 
"Did I tell you about my OpenWinamp project? It's up on SourceForge", he says as he strides over. It's unsettling how fast this man is.
 
"JAVASCRIPT IS JUST A SUBSET OF JAVA!" you yell behind you, tossing the words at him like a German potato masher as you power walk away. It does its job, stopping Milbert dead in his tracks.
 
Dave snickers. The Intern keeps staring wide-eyed. You position yourself somewhat close to the studio's 3 young receptionists, hoping they serve as a kind of ritual circle of protection.
 
It works... kind of. Milbert is now standing uncomfortably close to The Intern, Dave nowhere to be seen.
 
From across the room you distinctly hear "Think about it, the 1st-person UI could be Lua-driven Electron."
 
The Intern clearly understands that words are being spoken to them, but does not comprehend their meaning.
 
You briefly feel sorry for the sacrificial lamb.
 

11

You slide across the wall, putting even more distance between you & boredom made man. That's when you spot him, arrogantly aloof in the corner: Glen Glengerry. Core engineering's most senior developer.
 
Working his way up from a 16-year-old game tester making $4.35 an hour plus free Dr. Shasta, to pulling in a cool $120K just 27 years later, plus benefits & Topo Chicos. His coding style guides catechism, his Slack pronouncements ex cathedra; he might as well be CTO.
 
You feel lucky your team is embedded with the artists. You may have sat through their meetings wondering why the hell you should care about color theory, artistic consistency, & debates about whether HSL or CMYK was the superior color space (spoiler: it's HSL), but you were independent and, to them, a fucking code wizard, man.
 
And there he stands, this pseudo-legend, so close you could throw a stapler at him. Thinning grey-blonde tendrils hanging down from his CodeWarrior hat, white tee with This Guy VIMs on the back, tucked into light blue jeans. He's staring out into the lobby at everything and yet... nothing at all.
 

12

Maybe it's the 4.8% ABV. Maybe it's the years of crushing down anger into a singularity, waiting for it to undergo rapid fiery expansion, a Big Bang of righteous fury. Maybe it's those sandals with white socks. Maybe it's all three. But whatever it is, it's as if God himself compels you to march over & give him a piece of your mind, seniority be damned.
 
"Listen, you big dumb bastard..."
 
That... is maybe a little too aggressive. But Glen Glengerry barely reacts. Pulling a flask out of his back pocket, he doesn't look over as he passes it to you.
 
Ugh. Apple Pucker.
 

13

"I thought bringing in your own alcohol was against company policy", wiping sticky green sludge from your lips. He turns with a look of pure disdain & snorts.
 
"You think they're going to tell ME what I can & can't bring in?" He grabs the flask back, taking a big swig.
 
For what feels like an eternity, you both stand in silence. You swallow, speaking softly. "None of you even looked at my code. I worked very, very hard on that. My performance review for that year simply read 'recommend performance improvement plan.'" The words need no further context.
 
"I know", Glen² replies. "That was me."
 

14

Now you're not a weak man, and maybe in some other circumstance you would have punched him in the goddamn lip. But you feel nothing, just a hollowness inside. "Why?", you ask, wondering if the answer would even matter.
 
"Because you don't use Bulgarian notation. Because your method names aren't lower camel case. Because good code doesn't require comments. Because you use classes & records over more performant structs, pointlessly burdening the heapstack. BECAUSE. YOUR CODE. IS. SHIT."
 
You clench your fists so tightly the knuckles whiten.
 

15

He looks away from you, taking another sip of green goo. "You're not a coder. You're an artist masquerading as one," he says, as if it were fact.
 
The only thing artistic about you is the ability to create user-friendly internal tooling using nothing but a UI framework, broken down garbage nobody wants to touch, & sheer willpower. If your son's life depended on you getting accepted into art instruction school, you couldn't even draw a turtle.
 
He doesn't pause. "I'll champion ruthless micro-optimization until the day I die. But buddy, I'm going to let you in on a little secret: you aren't here to improve workflow. You're here to LOOK like you're doing something NOBODY else can."
 
He goes on. "What do you think those artists are going to do when they have to stare at a progress bar for 4, 5 minutes? They're going to complain your tool is slow."
 
"Sure, it may take them 20, 30 minutes to do it the old way, there'll be an error, and either they'll stare at it for 30 minutes before adding that missing semi-colon or they'll come get you. And you'll fix it. And 1 week later, they won't remember how. And you'll stay employed. And every. Body. Wins."
 

16

A little bit of the pride, the caring, wells back up inside from somewhere long forgotten.
 
"You don't think we should care about rapid application development & KISS, quickly getting things out that help our team, instead devoting ourselves to shaving off ticks here & there? What do you think artists are going to do with those 4 minutes you talk about?
 
You don't stop. "I'll tell you what they'll do. They'll 9GAG for 20 minutes straight. They'll listen to podcasts about dialectical materialism vis-a-vis the neo-feudalism that is a natural extension of the modern world's capitalist prison. They'll Reddit."
 
His silence gives you the bravery to push the limits.
 
"Christ, man. Are you only in it for the $120K..."
 
He corrects you: "...$123K."
 
"...only in it for the $123K/year? The free snacks from the microkitchen? The adulation? Have you no sense of comraderie?? No desire to push us to something better?! No integrity?!!!"
 
His eyes sharply narrow, face creases in anger. You clearly have overstepped your bounds.
 

17

"You think I don't have integrity? No sense of teamwork? I'm only in it for the cold cash? You think I don't care about you all?", he roars.
 
A light volley of small green flecks land on your face.
 
"Why do you think they made a 16-year old tester the lead developer of a 1993 Doom clone?! Because my code was clean & painless to work with?! Because I made coding look easy?! No! IT WAS BECAUSE I WAS A GOD TO THEM.
 
And from a God, a PANTHEON. We built monuments to over-engineering! We crafted that of 7 weeks onboarding, that of immortal bugs, demonic hosts spawned by legion from the very loins of a fix. It took 2 years before a developer could BEGIN to feel confident they knew what they were doing. And by that time, they were one of US!
 
You think the team we laid off November '19 was fired because they were bad at their jobs? NO! It was because they worked themselves out of one. They didn't leave us a broken pipeline. They left an internal Wiki, a wealth of tools & example projects, and a completely transparent code base.
 
We couldn't have THAT, now could we? No, we couldn't. So we got rid of it. ALL OF IT. Poof. Gone. Just like that. Before anyone even knew a THING."
 
He leans forward, so close his psoriasis almost touches yours.  
With an intensity that borders on frightening, he whispers, "You think they left us Game.dll? I fucking MADE Game.dll."
 
The words hit hard like a freight train.
 

18

And without another word, he turns & leaves. You're left there, alone, coworkers milling about, with only one thought.
     
Were one to get a hobby, should it be cocaine?
 

In Conclusion

It's these kinds of situations that make me believe there are far more important considerations than a ruthless dedication to performance, even in the game industry, as my real-world scenario so clearly demonstrates.
 
Like, records are cool & shit.
submitted by form_d_k to shittyprogramming [link] [comments]

Pi-hole for Windows, now even easier to set up

PH4WSL1.cmd (Pi-hole for Windows)
This script performs an automated install of Pi-hole 5 on Windows 10 (version 1809 and newer) / Windows Server 2019 (Standard or Core). No Linux, virtualization, or container expertise required.
If you have an issue installing PH4WSL1.cmd please don't bother the Pi-hole developers. Your best option is to open an issue on the GitHub page.
Copy PH4WSL1.cmd to your computer and "Run as Administrator"
If Windows is not up to date, the Pi-hole installer will throw an "Unsupported OS" error midway through the installation; see below for the required update KB. Uninstall Pi-hole, update your machine, and try again.
  • Enables WSL1 and downloads Ubuntu 20.04 from Microsoft
  • Installs and Configures distro, downloads and executes Pi-hole installer
  • Creates a /etc/pihole/setupVars.conf file for an automated install
  • Adds exceptions to Windows Firewall for DNS and Pi-hole admin page
  • Includes a Scheduled Task Pi-hole_Task.cmd to allow auto-start at boot, before logon. Edit the task, under General tab check Run whether user is logged on or not and Hidden and (if needed) in the Conditions tab uncheck Start the task only if the computer is on AC power
Requires the recent (August/Sept 2020) WSL update for Windows 10:
  • 1809 - KB4571748
  • 1909 - KB4566116
  • 2004 - KB4571756
Additional Info:
  • DHCP Server is disabled
  • To reset or reconfigure Pi-Hole, run Pi-hole_Reconfigure.cmd in the Pi-hole install folder
  • To uninstall Pi-Hole, run Pi-hole_Uninstall.cmd in the Pi-hole install folder
Below is a console dump and (trimmed) screenshot of the install procedure:
Pi-hole for WSL
---------------
Location of 'Pi-hole' folder [Default = C:\Program Files]
Response:
Pi-hole listener IP and subnet in CIDR format, ie: 192.168.1.99/24
Response: 10.74.0.253/24
Port for Pi-hole. Port 80 is good if you don't have a webserver, or hit enter for default [8880]:
Response: 80
Install to: C:\Program Files\Pi-hole
Network: 10.74.0.253/24
Port: 80
Fetching LxRunOffline...
Installing distro...
Configuring distro, this can take a few minutes...
Extracting templates from packages: 100%
[✓] Root user check
[Pi-hole ASCII logo]
[✓] Update local cache of available packages
[i] Existing PHP installation detected : PHP version 7.4.3
[i] Performing unattended setup, no whiptail dialogs will be displayed
[✓] Disk space check
[✗] Checking apt-get for upgraded packages
Kernel update detected. If the install fails, please reboot and try again
[i] Installer Dependency checks...
[✓] Checking for dhcpcd5
[✓] Checking for git
[✓] Checking for iproute2
[✓] Checking for whiptail
[✓] Checking for dnsutils
[✓] Supported OS detected
[i] SELinux not detected
[✗] Check for existing repository in /etc/.pihole
[i] Clone https://github.com/pi-hole/pi-hole.git into /etc/.pihole... HEAD is now at 6b536b7 Merge pull request #3564 from pi-hole/release/v5.1.2
[✓] Clone https://github.com/pi-hole/pi-hole.git into /etc/.pihole
[✗] Check for existing repository in /var/www/html/admin
[i] Clone https://github.com/pi-hole/AdminLTE.git into /var/www/html/admin... HEAD is now at a03d1bd Merge pull request #1498 from pi-hole/release/v5.1.1
[✓] Clone https://github.com/pi-hole/AdminLTE.git into /var/www/html/admin
[✓] Enabling lighttpd service to start on reboot...
[✓] Creating user 'pihole'
[i] FTL Checks...
[✓] Detected x86_64 architecture
[i] Checking for existing FTL binary...
[✓] Downloading and Installing FTL
[✓] Installing scripts from /etc/.pihole
[i] Installing configs from /etc/.pihole...
[✓] No dnsmasq.conf found... restoring default dnsmasq.conf...
[✓] Copying 01-pihole.conf to /etc/dnsmasq.d/01-pihole.conf
[✓] Preparing new gravity database
[i] Target: https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
[✓] Status: Retrieval successful
[i] Received 56949 domains
[i] Target: https://mirror1.malwaredomains.com/files/justdomains
[✓] Status: Retrieval successful
[i] Received 26854 domains
[✓] DNS service is running
[✓] Pi-hole blocking is Enabled
[i] Web Interface password: EPDvXZPh
[i] This can be changed using 'pihole -a -p'
[i] View the web interface at http://pi.hole/admin or http://10.74.0.253/admin
[i] You may now configure your devices to use the Pi-hole as their DNS server
[i] Pi-hole DNS (IPv4): 10.74.0.253
[i] If you set a new IP address, please restart the server running the Pi-hole
[i] The install log is located at: /etc/pihole/install.log
Installation Complete!
Web Interface Admin
Enter New Password (Blank for no password):
[✓] Password Removed
SUCCESS: The scheduled task "Pi-hole for WSL" has successfully been created.
SUCCESS: Attempted to run the scheduled task "Pi-hole for WSL".
Wait for Pi-hole launcher window to close and Press any key to continue . . .
Pi-hole for WSL Installed to C:\Program Files\Pi-hole
Expected installer output (truncated screen shot)
Pi-hole-Reconfigure.cmd
Pi-hole running alongside your Windows apps. It can run on a Windows PC with just one CPU core and 1GB RAM.
submitted by desktopecho to pihole [link] [comments]

Zabbix 5.2 is released! Some more details.

The new major release comes with an impressive list of new features, improvements and out of the box integrations:
Zabbix offers out of the box official integrations with:
Other major improvements:
Official packages are available for:
One-click deployment is available for the following cloud platforms:
and much more!
Read release notes for a complete list of improvements: https://www.zabbix.com/rn/rn5.2.0
In order to upgrade you just need to download and install the new binaries (server, proxy and Web UI). When you start Zabbix Server it will automatically upgrade your database. Zabbix agents are backward compatible, so there is no need to install new agents; you can do that anytime later if needed.
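As a rough sketch of what that looks like on a Debian/Ubuntu host with the MySQL backend (the repository package below is the official Zabbix one for Ubuntu 20.04; adjust it and the package names to your distro and database):
$ wget https://repo.zabbix.com/zabbix/5.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_5.2-1+ubuntu20.04_all.deb
$ sudo dpkg -i zabbix-release_5.2-1+ubuntu20.04_all.deb
$ sudo apt update
$ sudo apt install --only-upgrade zabbix-server-mysql zabbix-frontend-php
$ sudo systemctl restart zabbix-server
The database schema upgrade then runs automatically on the first server start, as described above.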
submitted by alexvl to zabbix [link] [comments]

[Update] CCSupport 1.3 - Module Providers

CCSupport 1.3 is out now on https://opa334.github.io (also submitted to BigBoss) and adds a new feature that module developers can utilize.
Previously CCSupport only loaded regular third-party modules. Every single CC module added needed its own bundle / binary. This made certain things impossible, such as giving the user an option to specify how many instances of a certain module they want (unless you planned on doing some crazy shenanigans like FlipConvert).
Well, long story short, this update addresses that limitation by adding an additional API that allows developers to create module providers. A module provider can provide an arbitrary amount of modules; here is a video of my example provider in action (note that this specific provider provides the same module multiple times, but this is not required at all; you could make a provider provide a 2x2 app launcher module, a network module and some random switch if you wanted).
For developers interested, module providers are documented here and a new theos template for providers has been released here.
Have fun and follow me on twitter!
submitted by opa334 to jailbreak [link] [comments]

A Guide to using the Steam Controller in 2020 (Guide and support thread).

Hey there! My name is Spork, and I'll be guiding you on how to use (or keep using) the Steam Controller for Rocket League. After Rocket League was taken off the steam store, I figured old or new players could use a guide on how to use the SC (steam controller) on both PC launchers.
Even if you don't use the steam controller, I encourage you to read and upvote this post. ***MODS, STAY AWAY FROM THE DELETE BUTTON FOR ONE SECOND***. I am not asking for karma or internet clout. I ask people to upvote this post so that people looking for this guide on Google can see it and have their day made easier. This guide also includes details that can help with controller problems when transitioning to the EG launcher (I'm looking at you, Nintendo Switch Pro Controller).
I have around 300 hours of use with this thing, and I'd like to say I'm pretty familiar with how it works and I want to help you out.
There are 2 guides for this controller, one for the Steam platform and one for the Epic Games platform.

It's Glorious.

Steam Launcher Guide

If you bought the game previously on steam, great! Your controller will be much more powerful and easy to use. Here's a brief guide on customizing your steam controller for Rocket League:
Steam Controller Configuration VS In Game Bindings
Using a generic gamepad configuration
One of the benefits to using the steam controller is that it works with virtually EVERY game due to what I like to call Dynamic Firmware Customization. Using the controller configuration tool in Big Picture mode, you can change the bindings that the controller will emulate. For instance, if you want to play a game that doesn't support controllers, you can change your bindings to emulate a keyboard or other device. The best part is, Steam will automatically update the firmware of your controller to match these bindings, which means there is no external communication through steam. Essentially, your computer thinks that the steam controller is actually a keyboard and therefore communicates with your game directly. This can help reduce latency with the game that you're playing. However, Steam must be open for this to work, otherwise it acts as an HID device with very basic keyboard/mouse inputs called "Lizard Mode". (See https://www.pcgamingwiki.com/wiki/Controller:Steam_Controller for details). There are some technical workarounds, but I don't bother with them.
So, when you launch up rocket league, your steam controller will load up a default configuration (most likely XInput) to act as a generic gamepad. (There is one exception to this, but I'll cover that later). By default, the Grip buttons are set to A, X, B, or Y. Personally, I like to set them to bumpers for air roll and powerslide. Here is an example of my configuration:
Don't worry about anything that looks confusing like "Action sets". These settings can be useful only when you are very familiar with your controller.
You'll see that I've set each button on the controller to act as a normal gamepad. Each button is bound to a controller output instead of an In-Game action. The steam controller will act as a normal gamepad, meaning that you can change your actual in-game actions in the Rocket League settings.
Additionally, you can upload your configs to share with others. If you want to browse these configs, just click X on this screen to see configurations the community is using.
You can customize your controller in this way by navigating to this screen:

Don't worry about Controller Options, they are on default settings. The real changes you can make in that menu are related to Steam In-Game Input mentioned later.
The best part is, you can customize other controllers this way as well (but it isn't optimized with real-time firmware changes like the steam controller). To do this, simply navigate to your settings on the default Big Picture homepage and go to Controller settings. From there you can enable "Xbox Configuration Support", "PS4 Config Support", etc.
It's also possible to change these settings in game using the Big Picture steam overlay. Just click the steam button (also known as the guide button) to bring up the steam overlay. You will see a controller configuration option in the menu. (Note: you must launch your game from big picture mode for this to work).
Using Steam Input
Rocket League also supports something they call "Steam Input". This means Rocket League allows you to configure your in-game controls using the steam configuration tool. When this setting is enabled, your configuration will look like this:

Notice now that the controller config changes actual in-game actions rather than a controller input.
This allows for fuller customization of your in game controls using actual in game actions instead of controller bindings. For instance, here is a config I made experimenting with Gyro controls for mid air movement:

This configuration allows me to move my car in midair by physically rotating and moving my controller, without pressing any buttons.
If you want to change your config to or back from Steam Input, just follow these steps:
1: Click X on the config screen to browse configs.
2: Select the "(LEGACY) Official Psyonix Bindings" for normal gamepad emulation, or select "Rocket League Standard Controls" for steam input. If you have a previous configuration using steam input or regular input saved to your account, you can switch to these configurations as well.
https://preview.redd.it/rssop1diyjq51.png?width=1920&format=png&auto=webp&s=6a436748fde0deb49dd8f5502c8e6a2342a380d6
There is one downside to using steam input: latency and lag. Because your game now has to communicate with steam to get your inputs (instead of communicating directly with your controller) it takes more time to register an input. Additionally, the game will require more resources to communicate with steam in the first place and can make your game slower. I recommend using a generic gamepad configuration for this reason, but if it doesn't bother you, go for it. If you wish to completely disable steam input, navigate back to the controller setting screen I showed before:

https://preview.redd.it/8m7zh7cgzjq51.png?width=1920&format=png&auto=webp&s=cd8231b7a9b917588565691f335fdc63012d510e
Navigate to the setting highlighted and change it to "Forced Off".
https://preview.redd.it/3fsjo9n6rkq51.png?width=1920&format=png&auto=webp&s=31eeee245883b675688b4c11785188f6098c58f1
This will make it so only generic gamepad configurations work with rocket league. If you need more help with this topic, psyonix has a support article here: https://support.rocketleague.com/hc/en-us/articles/360015501594-Steam-Controller-Configuration-Beta-
That's it to using the steam controller on the steam version of Rocket League! If you want to know more about the steam controller, there are many guides on YouTube to help you customize your controller further.
EDIT: Something important to note that was brought up by u/TheLadForTheJob is that using the big picture steam overlay can have an impact on performance in your game. I recommend that you launch the game in big picture mode when customizing your controller, but outside of that, launch the game OUTSIDE of big picture mode for a boost in performance.

Epic Games Launcher Guide

Now, obviously, the Epic Games version of Rocket League does not recognize the steam controller as a gamepad. It's hard to set up the steam controller to work with games that use launchers, since the application launch script is different from, say, a DRM-free application. It is possible to use the steam controller for the EG version of Rocket League, but not in the way you think.
I tested several different methods with the EG version of Rocket League and these were the best solutions I could find.
Edit: The best option is to download and use SteamGridDB Manager, an application that automatically adds games from your other launchers into your steam library and launches them with the right scripts. I've tested the most recent build of this app (0.4.2) with Rocket League and it works flawlessly. It will add a Non-Steam game to your library that will launch rocket league with the right scripts so you have access to online play and the EG online services. You can follow the guide about using a generic gamepad configuration in the steam guide above to customize your controls after you add the EG version of rocket league with SteamGridDB Manager. The launch process is a lot simpler than the solution below. If you want to know more about how to use this app, visit https://www.steamgriddb.com/manager to learn more. However, if you don't want to use this application, feel free to follow the guide below:
ADD THE EPIC LAUNCHER AS A NON-STEAM GAME:
Many people, when trying to use the steam controller with games from other launchers like Epic Games, will navigate to the .exe file of their game and add it as a non-steam game to their library. This works (at least for me) about 1/5th of the time. It flat out doesn't work with Epic; if you try to launch the RocketLeague.exe file that Epic installed, it won't connect you to the online service.
What you'll want to do is add the epic games launcher to your steam library as a non steam game. This allows you to run the epic games launcher with predetermined configurations using steam. When you launch it, it'll tell the controller to switch configurations and act as a gamepad. Then, when you launch Rocket League from the Epic Games launcher, it will be recognized as a controller.
The steps are as follows:
1: Navigate to the bottom left of the Steam window and click ADD A GAME, then click Add a Non-Steam Game...

https://preview.redd.it/4f7psyu2lkq51.png?width=310&format=png&auto=webp&s=122911b18df1d45f114f259f53750f7601da8d32
Once the window is open, select the Epic Games Launcher:

If the Epic Games Launcher does not show up in this list, click BROWSE... and navigate to the .exe file of the launcher (EpicGamesLauncher.exe). A common filepath for the launcher is C:\Program Files (x86)\Epic Games\Launcher\Engine\Binaries\Win32 or Win64.
Once you've added the Epic Games Launcher to your steam library, click on it and select Controller Configuration. (This setting might not appear if your steam controller is not on. If the controller is on and you still don't see it, launch in big picture mode and follow the instructions found in the Steam Launcher portion of this guide for configuring your controller.)

https://preview.redd.it/p8ydzbb2nkq51.png?width=1074&format=png&auto=webp&s=f16db4941a1c819cab40bb5795c66bfd84c9564b
You should now see this screen in a window:

https://preview.redd.it/we60yb2fnkq51.png?width=1270&format=png&auto=webp&s=6b9a7e6e9bdbb55c4991f3145422850616963921
Make sure to click "BROWSE CONFIGS", then TEMPLATES, then scroll until you see a configuration that is named "GAMEPAD". Apply that configuration so that it looks similar to the picture above. From here, you can change your controller bindings (ex in the picture I changed the grip buttons to be each bumpers). Make sure to export your config and save it as a personal binding in case something happens to it.
Click DONE to save your config and exit this window. Once you have done this, you will be able to play Rocket League using the steam controller on the Epic Games Launcher!
Make sure to launch the epic games launcher from Steam:

If you don't launch it from steam, the controller config may not work.
Then, launch rocket league from the Epic Games store with your mouse and/or keyboard. Once you load into the game, the steam controller should be recognized as a gamepad instead of a mouse and keyboard!
While I've made this guide specific to the steam controller, this guide may also work with other gamepads that the Epic Games Launcher does not recognize. If it can be recognized by Steam and is configurable (e.g. the Nintendo Switch Pro Controller), these steps can be repeated to get your gamepad working with the EG version of Rocket League. I will try to post a Nintendo Switch Pro Controller-specific guide to help those transitioning to the EG launcher, or brand new players who want to play on a computer instead.
Edit: u/TheLadForTheJob posted a comment detailing how you can shorten the launch process with specific launching scripts attached so it launches rocket league automatically. This is similar to the SteamGridDB Manager solution, but without the actual application. I encourage you to check out his comment: https://www.reddit.com/RocketLeague/comments/j3kk85/a_guide_to_using_the_steam_controller_in_2020/g7f1pbz?utm_source=share&utm_medium=web2x&context=3

Conclusion

I hope you found this guide helpful! If you have any questions, feel free to post a comment and we will try to help you out. I've tried to make this guide as comprehensive as possible, but if you think I missed something or got something wrong, post it in the comments so I can edit this guide.
Feel free to share your steam controller configurations as well to help new players!
Your friend,
SPORK
submitted by TheQuintessent to RocketLeague [link] [comments]

Best Setup for Debugging/Reversing

Hi, I'm looking for a setup for debugging/reversing binaries on many different operating systems. It's important that it can create new VM instances from OS templates as quickly as possible, and that the system is able to display GUI content fluidly. (So cloud VMs aren't the best option.) Also, it would be nice if it had enough power to run a pfSense instance for network monitoring at the same time.
So far I have found two options:
  1. A PC build with a Ryzen 3900X and a fast NVMe SSD running VMware Player. This should have enough performance for my task, but it is quite expensive, especially because I would have to buy a separate graphics card these days 😅
  2. Buying old server hardware. Where I live you can get a used PowerEdge R710 with two Xeon X5650s (2 × 6 cores/12 threads) for 200 bucks. This sounds quite interesting, but will it fit my requirements? Running ESXi it might have problems displaying the GUI fluidly, especially since it has no dedicated graphics card and the enterprise cards are too expensive for me.
Which option would be best? Maybe you have other suggestions.
Thank you for your help.
submitted by che_spl0it to virtualization [link] [comments]

NASPi: a Raspberry Pi Server

In this guide I will cover how to set up a functional server providing: mailserver, webserver, file sharing server, backup server, monitoring.
For this project a dynamic domain name is also needed. If you don't want to spend money registering a domain name, you can use services like dynu.com or duckdns.org. Between the two, I prefer dynu.com, because you can set every type of DNS record (TXT records are only available after 30 days, but that's a fair trade for not spending ~15€/year on a domain name); TXT records in particular are needed for the mailserver.
Also, I highly suggest you read the documentation of the software used, since I cannot cover every feature.

Hardware


Software

(minor utilities not included)

Guide

First things first, we need to flash the OS to the SD card. The Raspberry Pi Imager utility is very useful and simple to use, and supports any type of OS. You can download it from the Raspberry Pi download page. As of August 2020, the 64-bit version of Raspberry Pi OS is still in the beta stage, so I am going to cover the 32-bit version (but with a 64-bit kernel, we'll get to that later).
Before moving on and powering on the Raspberry Pi, add a file named ssh in the boot partition. Doing so will enable the SSH interface (disabled by default). We can now insert the SD card into the Raspberry Pi.
Once powered on, we need to attach it to the LAN, via an Ethernet cable. Once done, find the IP address of your Raspberry Pi within your LAN. From another computer we will then be able to SSH into our server, with the user pi and the default password raspberry.
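As a minimal sketch (assuming your LAN is 192.168.0.0/24 and nmap is installed on the other computer; 192.168.0.5 here is just an example address), discovering the Pi and logging in looks like this:
$ nmap -sn 192.168.0.0/24
$ ssh pi@192.168.0.5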

raspi-config

Using this utility, we will set a few things. First of all, set a new password for the pi user, using the first entry. Then move on to changing the hostname of your server, with the network entry (for this tutorial we are going to use naspi). Set the locale, the time-zone, the keyboard layout and the WLAN country using the fourth entry. At last, enable SSH by default with the fifth entry.
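All of the above happens inside a single utility, launched from the SSH session we just opened:
$ sudo raspi-config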

64-bit kernel

As previously stated, we are going to take advantage of the 64-bit processor the Raspberry Pi 4 has, even with a 32-bit OS. First, we need to update the firmware, then we will tweak some config.
$ sudo rpi-update
$ sudo nano /boot/config.txt
arm_64bit=1
$ sudo reboot

swap size

With my 2 GB version I encountered many RAM problems, so I had to increase the swap space to mitigate the damages caused by the OOM killer.
$ sudo dphys-swapfile swapoff
$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=1024 
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Here we are increasing the swap size to 1 GB. According to your setup you can tweak this setting to add or remove swap. Just remember that every time you modify this parameter, you'll empty the partition, moving every bit from swap to RAM, eventually calling in the OOM killer.

APT

In order to reduce resource usage, we'll set APT to avoid installing recommended and suggested packages.
$ sudo nano /etc/apt/apt.conf.d/01norecommend
APT::Install-Recommends "0";
APT::Install-Suggests "0";

Update

Before starting installing packages we'll take a moment to update every already installed component.
$ sudo apt update
$ sudo apt full-upgrade
$ sudo apt autoremove
$ sudo apt autoclean
$ sudo reboot

Static IP address

For simplicity's sake we'll give our server a static IP address (within our LAN of course). You can set it using your router's configuration page or set it directly on the Raspberry Pi.
$ sudo nano /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.0.5/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
$ sudo reboot

Emailing

The first feature we'll set up is the mailserver. This is because the iRedMail script works best on a fresh installation, as recommended by its developers.
First we'll set the hostname to our domain name. Since my domain is naspi.webredirect.org, the domain name will be mail.naspi.webredirect.org.
$ sudo hostnamectl set-hostname mail.naspi.webredirect.org
$ sudo nano /etc/hosts
127.0.0.1 mail.naspi.webredirect.org localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 naspi
Now we can download and setup iRedMail
$ sudo apt install git
$ cd /home/pi/Documents
$ sudo git clone https://github.com/iredmail/iRedMail.git
$ cd /home/pi/Documents/iRedMail
$ sudo chmod +x iRedMail.sh
$ sudo bash iRedMail.sh
Now the script will guide you through the installation process.
When asked for the mail directory location, set /var/vmail.
When asked for webserver, set Nginx.
When asked for DB engine, set MariaDB.
When asked for, set a secure and strong password.
When asked for the domain name, set yours, but without the mail. subdomain.
Again, set a secure and strong password.
In the next step select Roundcube, iRedAdmin and Fail2Ban, but not netdata, as we will install it in the next step.
When asked for, confirm your choices and let the installer do the rest.
$ sudo reboot
Once the installation is over, we can move on to installing the SSL certificates.
$ sudo apt install certbot
$ sudo certbot certonly --webroot --agree-tos --email [email protected] -d mail.naspi.webredirect.org -w /var/www/html/
$ sudo nano /etc/nginx/templates/ssl.tmpl
ssl_certificate /etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem;
$ sudo service nginx restart
$ sudo nano /etc/postfix/main.cf
smtpd_tls_key_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/cert.pem
smtpd_tls_CAfile = /etc/letsencrypt/live/mail.naspi.webredirect.org/chain.pem
$ sudo service postfix restart
$ sudo nano /etc/dovecot/dovecot.conf
ssl_cert = </etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
$ sudo service dovecot restart
Now we have to tweak some Nginx settings in order to not interfere with other services.
$ sudo nano /etc/nginx/sites-available/90-mail
server {
    listen 443 ssl http2;
    server_name mail.naspi.webredirect.org;
    root /var/www/html;
    index index.php index.html;
    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}
server {
    listen 80;
    server_name mail.naspi.webredirect.org;
    return 301 https://$host$request_uri;
}
$ sudo ln -s /etc/nginx/sites-available/90-mail /etc/nginx/sites-enabled/90-mail
$ sudo rm /etc/nginx/sites-*/00-default*
$ sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    server_names_hash_bucket_size 64;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf-enabled/*.conf;
    include /etc/nginx/sites-enabled/*;
}
$ sudo service nginx restart
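Before moving on, it's worth a quick sanity check that Nginx and the mail services are actually listening (ss ships with iproute2 on Raspberry Pi OS; the port list is just the usual web and mail ports):
$ sudo ss -tlnp | grep -E ':(25|80|143|443|587|993)'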

.local domain

If you want to reach your server easily within your network, you can give it a .local domain. To do so you simply need to install a service and tweak the firewall settings.
$ sudo apt install avahi-daemon
$ sudo nano /etc/nftables.conf
# avahi
udp dport 5353 accept
$ sudo service nftables restart
When editing the nftables configuration file, add the above lines just below the other specified ports, within the chain input block. This is needed because avahi communicates via the 5353 UDP port.
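To verify that mDNS resolution works once the service is running, from another machine on the LAN (using the hostname we set earlier):
$ ping naspi.local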

RAID 1

At this point we can start setting up the disks. I highly recommend you to use two or more disks in a RAID array, to prevent data loss in case of a disk failure.
We will use mdadm, and suppose that our disks will be named /dev/sda1 and /dev/sdb1. To find out the names issue the sudo fdisk -l command.
$ sudo apt install mdadm
$ sudo mdadm --create -v /dev/md/RED -l 1 --raid-devices=2 /dev/sda1 /dev/sdb1
$ sudo mdadm --detail /dev/md/RED
$ sudo -i
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ exit
$ sudo mkfs.ext4 -L RED -m .1 -E stride=32,stripe-width=64 /dev/md/RED
$ sudo mkdir -p /NAS/RED
$ sudo mount /dev/md/RED /NAS/RED
The filesystem used is ext4, because it's the fastest. The RAID array is located at /dev/md/RED, and mounted to /NAS/RED.

fstab

To automount the disks at boot, we will modify the fstab file. Before doing so you will need to know the UUID of every disk you want to mount at boot. You can find out these issuing the command ls -al /dev/disk/by-uuid.
$ sudo nano /etc/fstab
# Disk 1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /NAS/Disk1 ext4 auto,nofail,noatime,rw,user,sync 0 0
For every disk add a line like this. To verify the functionality of fstab issue the command sudo mount -a.

S.M.A.R.T.

To monitor your disks, the S.M.A.R.T. utilities are a super powerful tool.
$ sudo apt install smartmontools
$ sudo nano /etc/default/smartmontools
start_smartd=yes 
$ sudo nano /etc/smartd.conf
/dev/disk/by-uuid/UUID -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../1/04) -m [email protected] 
$ sudo service smartd restart
For every disk you want to monitor add a line like the one above.
About the flags:
· -a: full scan.
· -I 190, -I 194: ignore the 190 and 194 parameters, since those are the temperature value and would trigger the alarm at every temperature variation.
· -d sat, -d removable: removable SATA disks.
· -o on: offline testing, if available.
· -S on: attribute saving, between power cycles.
· -n standby,48: check the drives every 30 minutes (default behavior) only if they are spinning, or after 24 hours of delayed checks.
· -s (S/../.././04|L/../../1/04): short test every day at 4 AM, long test every Monday at 4 AM.
· -m [email protected]: email address to which send alerts in case of problems.

Automount USB devices

Two steps ago we set up the fstab file in order to mount the disks at boot. But what if you want to mount a USB disk immediately when plugged in? Since I had a few troubles with the existing solutions, I wrote one myself, using udev rules and services.
$ sudo apt install pmount
$ sudo nano /etc/udev/rules.d/11-automount.rules
ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="[email protected]%k.service" 
$ sudo chmod 0777 /etc/udev/rules.d/11-automount.rules
$ sudo nano /etc/systemd/system/automount@.service
[Unit]
Description=Automount USB drives
BindsTo=dev-%i.device
After=dev-%i.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount %I
ExecStop=/usr/bin/pumount /dev/%I
$ sudo chmod 0777 /etc/systemd/system/automount@.service
$ sudo nano /usr/local/bin/automount
#!/bin/bash
PART=$1
FS_UUID=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $3}'`
FS_LABEL=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $2}'`
DISK1_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
DISK2_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
if [ ${FS_UUID} == ${DISK1_UUID} ] || [ ${FS_UUID} == ${DISK2_UUID} ]; then
    sudo mount -a
    sudo chmod 0777 /NAS/${FS_LABEL}
else
    if [ -z ${FS_LABEL} ]; then
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${PART}
    else
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${FS_LABEL}
    fi
fi
$ sudo chmod 0777 /usr/local/bin/automount
The udev rule triggers when the kernel announce a USB device has been plugged in, calling a service which is kept alive as long as the USB remains plugged in. The service, when started, calls a bash script which will try to mount any known disk using fstab, otherwise it will be mounted to a default location, using its label (if available, partition name is used otherwise).
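After saving the rule, reload udev so it takes effect without a reboot (standard udevadm invocations):
$ sudo udevadm control --reload-rules
$ sudo udevadm trigger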

Netdata

Let's now install netdata. For this another handy script will help us.
$ bash <(curl -Ss https://my-netdata.io/kickstart.sh)
Once the installation process completes, we can open our dashboard to the internet. We will use an Nginx reverse proxy, with a Let's Encrypt certificate for HTTPS.
$ sudo apt install python-certbot-nginx
$ sudo nano /etc/nginx/sites-available/20-netdata
upstream netdata {
    server unix:/var/run/netdata/netdata.sock;
    keepalive 64;
}
server {
    listen 80;
    server_name netdata.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://netdata;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/20-netdata /etc/nginx/sites-enabled/20-netdata
$ sudo nano /etc/netdata/netdata.conf
# NetData configuration
[global]
    hostname = NASPi
[web]
    allow netdata.conf from = localhost fd* 192.168.* 172.*
    bind to = unix:/var/run/netdata/netdata.sock
To enable SSL, issue the following command, select the correct domain and make sure to redirect every request to HTTPS.
$ sudo certbot --nginx
Now configure the alarm notifications. I suggest you read through the stock file before modifying it, so you can enable every service you would like. You'll spend some time on it, yes, but eventually you will be very satisfied.
$ sudo nano /etc/netdata/health_alarm_notify.conf
# Alarm notification configuration

# email global notification options
SEND_EMAIL="YES"
# Sender address
EMAIL_SENDER="NetData [email protected]"
# Recipients addresses
DEFAULT_RECIPIENT_EMAIL="[email protected]"

# telegram (telegram.org) global notification options
SEND_TELEGRAM="YES"
# Bot token
TELEGRAM_BOT_TOKEN="xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Chat ID
DEFAULT_RECIPIENT_TELEGRAM="xxxxxxxxx"

###############################################################################
# RECIPIENTS PER ROLE

# generic system alarms
role_recipients_email[sysadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sysadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# DNS related alarms
role_recipients_email[domainadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[domainadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# database servers alarms
role_recipients_email[dba]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[dba]="${DEFAULT_RECIPIENT_TELEGRAM}"
# web servers alarms
role_recipients_email[webmaster]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[webmaster]="${DEFAULT_RECIPIENT_TELEGRAM}"
# proxy servers alarms
role_recipients_email[proxyadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[proxyadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# peripheral devices
role_recipients_email[sitemgr]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sitemgr]="${DEFAULT_RECIPIENT_TELEGRAM}"
$ sudo service netdata restart
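To confirm the notifications actually go out, netdata bundles a test mode in its alarm-notify script (the path below is the default install location; adjust if yours differs):
$ sudo su -s /bin/bash netdata
$ /usr/libexec/netdata/plugins.d/alarm-notify.sh test
$ exit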

Samba

Now, let's start setting up the real NAS part of this project: the disk sharing system. First we'll set up Samba, for the sharing within your LAN.
$ sudo apt install samba samba-common-bin
$ sudo nano /etc/samba/smb.conf
[global]
# Network
workgroup = NASPi
interfaces = 127.0.0.0/8 eth0
bind interfaces only = yes

# Log
log file = /var/log/samba/log.%m
max log size = 1000
logging = file [email protected]
panic action = /usr/share/samba/panic-action %d

# Server role
server role = standalone server
obey pam restrictions = yes

# Sync the Unix password with the SMB password.
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
security = user

#======================= Share Definitions =======================
[Disk 1]
comment = Disk1 on LAN
path = /NAS/RED
valid users = NAS
force group = NAS
create mask = 0777
directory mask = 0777
writeable = yes
admin users = NASdisk
$ sudo service smbd restart
Now let's add a user for the share:
$ sudo useradd NASbackup -m -G users,NAS
$ sudo passwd NASbackup
$ sudo smbpasswd -a NASbackup
And at last let's open the needed ports in the firewall:
$ sudo nano /etc/nftables.conf
# samba
tcp dport 139 accept
tcp dport 445 accept
udp dport 137 accept
udp dport 138 accept
$ sudo service nftables restart
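A quick way to check the share from another LAN machine (smbclient is part of the samba suite; install it on the client if needed):
$ smbclient -L //naspi -U NASbackup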

NextCloud

Now let's set up the service to share disks over the internet. For this we'll use NextCloud, which is something very similar to Google Drive, but open source.
$ sudo apt install php-xmlrpc php-soap php-apcu php-smbclient php-ldap php-redis php-imagick php-mcrypt
First of all, we need to create a database for nextcloud.
$ sudo mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER nextcloud@localhost IDENTIFIED BY 'password';
GRANT ALL ON nextcloud.* TO nextcloud@localhost IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;
Then we can move on to the installation.
$ cd /tmp && wget https://download.nextcloud.com/server/releases/latest.zip
$ sudo unzip latest.zip
$ sudo mv nextcloud /var/www/nextcloud/
$ sudo chown -R www-data:www-data /var/www/nextcloud
$ sudo find /var/www/nextcloud/ -type d -exec sudo chmod 750 {} \;
$ sudo find /var/www/nextcloud/ -type f -exec sudo chmod 640 {} \;
$ sudo nano /etc/nginx/sites-available/10-nextcloud
upstream nextcloud {
    server 127.0.0.1:9999;
    keepalive 64;
}
server {
    server_name naspi.webredirect.org;
    root /var/www/nextcloud;
    listen 80;
    add_header Referrer-Policy "no-referrer" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Download-Options "noopen" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Permitted-Cross-Domain-Policies "none" always;
    add_header X-Robots-Tag "none" always;
    add_header X-XSS-Protection "1; mode=block" always;
    fastcgi_hide_header X-Powered-By;
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
    rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
    location = /.well-known/carddav { return 301 $scheme://$host:$server_port/remote.php/dav; }
    location = /.well-known/caldav { return 301 $scheme://$host:$server_port/remote.php/dav; }
    client_max_body_size 512M;
    fastcgi_buffers 64 4K;
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
    location / {
        rewrite ^ /index.php;
    }
    location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
        deny all;
    }
    location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }
    location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
        fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
        set $path_info $fastcgi_path_info;
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass nextcloud;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }
    location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
        try_files $uri/ =404;
        index index.php;
    }
    location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463";
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Download-Options "noopen" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "none" always;
        add_header X-XSS-Protection "1; mode=block" always;
        access_log off;
    }
    location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ {
        try_files $uri /index.php$request_uri;
        access_log off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/10-nextcloud /etc/nginx/sites-enabled/10-nextcloud
Now enable SSL and redirect everything to HTTPS
$ sudo certbot --nginx
$ sudo service nginx restart
Immediately after, navigate to your NextCloud page and complete the installation process, providing the database details and the location of the data folder, which is simply where the files you save on NextCloud will live. Because it might grow large, I suggest you specify a folder on an external disk.
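If you'd rather finish the setup from the terminal than from the browser, Nextcloud's occ tool can do the same job. A minimal sketch, assuming the database credentials created earlier and a hypothetical /NAS/nextcloud data folder on the external disk:
$ cd /var/www/nextcloud
$ sudo -u www-data php occ maintenance:install --database "mysql" --database-name "nextcloud" --database-user "nextcloud" --database-pass "password" --admin-user "admin" --admin-pass "change-me" --data-dir "/NAS/nextcloud"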

Minarca

Now to the backup system. For this we'll use Minarca, a web interface based on rdiff-backup. Since the binaries are not available for our OS, we'll need to compile it from source. It's not a big deal, even our small Raspberry Pi 4 can handle the process.
$ cd /home/pi/Documents
$ sudo git clone https://gitlab.com/ikus-soft/minarca.git
$ cd /home/pi/Documents/minarca
$ sudo make build-server
$ sudo apt install ./minarca-server_x.x.x-dxxxxxxxx_xxxxx.deb
$ sudo nano /etc/minarca/minarca-server.conf
# Minarca configuration.

# Logging
LogLevel=DEBUG
LogFile=/var/log/minarca/server.log
LogAccessFile=/var/log/minarca/access.log

# Server interface
ServerHost=0.0.0.0
ServerPort=8080

# rdiffweb
Environment=development
FavIcon=/opt/minarca/share/minarca.ico
HeaderLogo=/opt/minarca/share/header.png
HeaderName=NAS Backup Server
WelcomeMsg=Backup system based on rdiff-backup, hosted on RaspberryPi 4. <a href="https://gitlab.com/ikus-soft/minarca/-/blob/master/doc/index.md">docs</a>
DefaultTheme=default

# Enable Sqlite DB Authentication.
SQLiteDBFile=/etc/minarca/rdw.db

# Directories
MinarcaUserSetupDirMode=0777
MinarcaUserSetupBaseDir=/NAS/Backup/Minarca/
Tempdir=/NAS/Backup/Minarca/tmp/
MinarcaUserBaseDir=/NAS/Backup/Minarca/
$ sudo mkdir /NAS/Backup/Minarca/
$ sudo chown minarca:minarca /NAS/Backup/Minarca/
$ sudo chmod 0750 /NAS/Backup/Minarca/
$ sudo service minarca-server restart
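Before touching the firewall, a quick sanity check that the service actually came up; any HTTP response is fine here, we only care that something answers on port 8080:
$ curl -I http://127.0.0.1:8080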
As always we need to open the required ports in our firewall settings:
$ sudo nano /etc/nftables.conf
# minarca
tcp dport 8080 accept
$ sudo service nftables restart
And now we can open it to the internet:
$ sudo nano /etc/nginx/sites-available/30-minarca
upstream minarca { server 127.0.0.1:8080; keepalive 64; } server { server_name minarca.naspi.webredirect.org; location / { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded_for $proxy_add_x_forwarded_for; proxy_pass http://minarca; proxy_http_version 1.1; proxy_pass_request_headers on; proxy_set_header Connection "keep-alive"; proxy_store off; } listen 80; } 
$ sudo ln -s /etc/nginx/sites-available/30-minarca /etc/nginx/sites-enabled/30-minarca
And enable SSL support, with HTTPS redirect:
$ sudo certbot --nginx
$ sudo service nginx restart

DNS records

As a last thing, you will need to set up your DNS records, in order to avoid having your mail rejected or sent to spam.

MX record

name: @
value: mail.naspi.webredirect.org
TTL (if present): 90

PTR record

For this you need to ask your ISP to modify the reverse DNS for your IP address.

SPF record

name: @
value: v=spf1 mx ~all
TTL (if present): 90

DKIM record

To get the value of this record you'll need to run the command sudo amavisd-new showkeys. The value is between the parentheses (it should start with v=DKIM1), but remember to remove the double quotes and the line breaks.
name: dkim._domainkey
value: v=DKIM1; p= ...
TTL (if present): 90
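If you don't want to clean the key up by hand, a rough pipeline like this can do the stripping for you (a sketch; it assumes the quoted chunks in the showkeys output are the only quoted material, so double-check the result before pasting it into your DNS panel):
$ sudo amavisd-new showkeys | grep '"' | tr -d '" \n'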

DMARC record

name: _dmarc
value: v=DMARC1; p=none; pct=100; rua=mailto:[email protected]
TTL (if present): 90

Router ports

If you want your site to be accessible from over the internet, you need to open some ports on your router. Here is a list of mandatory ports, but you can choose to open other ports too, for instance port 8080 if you want to use Minarca even outside your LAN.

mailserver ports

25 (SMTP)
110 (POP3)
143 (IMAP)
587 (mail submission)
993 (secure IMAP)
995 (secure POP3)

ssh port

If you want to open your SSH port, I suggest you move it to something different from port 22 (the default port), to mitigate attacks from the outside.
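Changing it is a single line in the SSH daemon config plus a restart (2222 below is just an example; pick any free port and remember to update your firewall and router rules accordingly):
$ sudo nano /etc/ssh/sshd_config
# change the line "Port 22" to e.g. "Port 2222"
$ sudo service ssh restart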

HTTP/HTTPS ports

80 (HTTP)
443 (HTTPS)

The end?

And now the server is complete. You have a mailserver capable of receiving and sending emails, a super monitoring system, a cloud server to have your files wherever you go, a Samba share to have your files on every computer at home, a backup server for every device you own, and a webserver if you'll ever want to have a personal website.
But now you can do whatever you want, add things, tweak settings and so on. Your imagination is your only limit (almost).
EDIT: typos ;)
submitted by Fly7113 to raspberry_pi [link] [comments]

What is the difference between Gentoo and Void Linux?

So I am considering installing either Gentoo or Void Linux. I know for a fact that Void Linux ships with binary packages by default, whereas with Gentoo you have to build everything yourself. Void Linux also gives users the option to compile everything from source if they so desire, I believe through the package manager, just like with Gentoo.
I know Void Linux uses runit but I believe Gentoo can also use runit if the user desires to use it over OpenRC.
So I don't really get the huge difference between Void Linux and Gentoo. Is Void Linux less customisable than Gentoo (in terms of what components you want)? Can the user specify what flags they want to custom-build with, tailoring packages for their PC? Can the user specify which specific packages they want installed on their system?
submitted by unix21311 to voidlinux [link] [comments]

Best Practices for A C Programmer

Hi all,
Long time C programmer here, primarily working in the embedded industry (particularly involving safety-critical code). I've been a lurker on this sub for a while but I'm hoping to ask some questions regarding best practices. I've been trying to start using c++ on a lot of my work - particularly taking advantage of some of the code-reuse and power of C++ (particularly constexpr, some loose template programming, stronger type checking, RAII etc).
I would consider myself maybe an 8/10 C programmer but I would conservatively rate myself as 3/10 in C++ (with 1/10 meaning the absolute minimum ability to write, google syntax errata, diagnose, and debug a program). Perhaps I should preface the post by saying that I am more than aware that C is by no means a subset of C++ and there are many language constructs permitted in one that are not in the other.
In any case, I was hoping to get a few answers regarding best practices for C++. Keep in mind that the typical target device I work with does not have a heap of any sort, and so a lot of the features that constitute "modern" C++ (post-initialization use of dynamic memory, STL meta-programming, hash-maps, lambdas as I currently understand them) are a big no-no in terms of passing safety review.

When do I overload operators inside a class as opposed to outside?

... And what are the arguments for/against each paradigm? See below:
/* Overload example 1 (overloaded inside class) */
class myclass {
private:
    unsigned int a;
    unsigned int b;
public:
    myclass(void);
    unsigned int get_a(void) const;
    bool operator==(const myclass &rhs);
};

bool myclass::operator==(const myclass &rhs)
{
    if (this == &rhs) {
        return true;
    } else {
        if (this->a == rhs.a && this->b == rhs.b) {
            return true;
        }
    }
    return false;
}
As opposed to this:
/* Overload example 2 (overloaded outside of class) */
class CD {
private:
    unsigned int c;
    unsigned int d;
public:
    CD(unsigned int _c, unsigned int _d) : d(_d), c(_c) {}; /* CTOR */
    unsigned int get_c(void) const; /* trivial getters */
    unsigned int get_d(void) const; /* trivial getters */
};

/* In this implementation, if I don't make the getters (get_c, get_d) const,
 * it won't compile despite their access specifiers being public.
 *
 * It seems like the const keyword in C++ really should be interpreted as
 * "read-only AND no side effects" rather than just read-only as in C.
 * But my current understanding may just be flawed...
 *
 * My confusion is as follows: The function args are constant references,
 * so why do I have to promise that the member functions have no side effects
 * on the private object members? Is this something specific to the == operator?
 */
bool operator==(const CD &lhs, const CD &rhs)
{
    if (&lhs == &rhs)
        return true;
    else if ((lhs.get_c() == rhs.get_c()) && (lhs.get_d() == rhs.get_d()))
        return true;
    return false;
}
When should I use the example 1 style over the example 2 style? What are the pros and cons of 1 vs 2?

What's the deal with const member functions?

This is more of a subtle confusion, but it seems like in C++ the const keyword means different things based on the context in which it is used. I'm trying to develop a relatively nuanced understanding of what's happening under the hood, and I most certainly have misunderstood many language features, especially because C++ has likely changed greatly in the last ~6-8 years.
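To make the confusion concrete, here is a minimal sketch of the behaviour I mean (my own toy example, not production code): through a const reference you can only call member functions that are themselves declared const.
class Counter {
    unsigned int n = 0;
public:
    unsigned int value(void) const { return n; } /* readable via const refs */
    void bump(void) { ++n; }                     /* not callable via const refs */
};

void observe(const Counter &c)
{
    c.value();   /* OK: value() promises not to modify *this */
    /* c.bump(); // error: bump() is not const, a const ref can't call it */
}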

When should I use enum classes versus plain old enum?

To be honest I'm not entirely certain I fully understand the implications of using enum versus enum class in C++.
This is made more confusing by the fact that there are subtle differences between the way C and C++ treat or permit various language constructs (const, enum, typedef, struct, void*, pointer aliasing, type punning, tentative declarations).
In C, enums decay to integer values at compile time. But in C++, the way I currently understand it, enums are their own type. Thus, in C, the following code would be valid, but a C++ compiler would generate a warning (or an error; I haven't actually tested it):
/* Example 3: (enums: Valid in C, invalid in C++) */
enum COLOR {
    RED,
    BLUE,
    GREY
};

enum PET {
    CAT,
    DOG,
    FROG
};

/* This is compatible with a C-style enum conception but not C++ */
enum SHAPE {
    BALL = RED, /* In C, these work because int = int is valid */
    CUBE = DOG,
};
If my understanding is indeed correct, do enums have an implicit namespace (the language construct, not the C++ keyword) as in C? As an add-on to that, in C++ you can also declare enums with a specified underlying type (below). What am I supposed to make of this? Should I just be using it to reduce code size when possible (similar to the gcc option -fuse-packed-enums)? Since most processors are word-based, would it be more performant to use the processor's word type than the syntax specified above?
/* Example 4: (Purely C++ style enums, use of enum class / enum struct) */

/* C++ permits forward enum declaration with type specified */
enum FRUIT : int;
enum VEGGIE : short;

enum FRUIT /* As I understand it, these are ints */
{
    APPLE,
    ORANGE,
};

enum VEGGIE /* As I understand it, these are shorts */
{
    CARROT,
    TURNIP,
};
Complicating things even further, I've also seen the following syntax:
/* What the heck is an enum class anyway? When should I use them? */
enum class THING {
    THING1,
    THING2,
    THING3
};

/* And if classes and structs are interchangeable (minus assumptions
 * about default access specifiers), what does that mean for
 * the following definition? */
enum struct FOO /* Is this even valid syntax? */
{
    FOO1,
    FOO2,
    FOO3
};
Given that enumerated types greatly improve code readability, I've been trying to wrap my head around all this. When should I be using the various language constructs? Are there any pitfalls in a given method?
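For what it's worth, here is the small test I've been playing with to pin down the scoping difference (my own sketch, nothing authoritative):
enum COLOR { RED, BLUE };         /* plain enum: enumerators leak into scope */
enum class SHADE { LIGHT, DARK }; /* enum class: enumerators are scoped */

int main(void)
{
    int a = RED;                 /* OK: plain enums implicitly convert to int */
    SHADE s = SHADE::LIGHT;      /* enum class requires the qualified name */
    /* int b = s;               // error: no implicit conversion to int */
    int c = static_cast<int>(s); /* explicit conversion is fine */
    return a + c;
}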

When to use POD structs (a-la C style) versus a class implementation?

If I had to take a stab at answering this question, my intuition would be to use POD structs for passing aggregate types (as in function arguments) and using classes for interface abstractions / object abstractions as in the example below:
struct aggregate {
    unsigned int related_stuff1;
    unsigned int related_stuff2;
    char name_of_the_related_stuff[20];
};

class abstraction {
private:
    unsigned int private_member1;
    unsigned int private_member2;
protected:
    unsigned int stuff_for_child_classes;
public:
    /* big 3 */
    abstraction(void);
    abstraction(const abstraction &other);
    ~abstraction(void);

    /* COPY semantic (I have a better grasp on this abstraction than MOVE) */
    abstraction &operator=(const abstraction &rhs);

    /* MOVE semantic (subtle semantics of which I don't fully grasp yet) */
    abstraction &operator=(abstraction &&rhs);

    /*
     * I've seen implementations of this that use a copy + swap design pattern
     * but that relies on std::move and I realllllly don't get what is
     * happening under the hood in std::move
     */
    abstraction &operator=(abstraction rhs);

    void do_some_stuff(void); /* member function */
};
Is there an accepted best practice for this, or is it entirely preference? Are there arguments for only using classes? And what about vtables (e.g. device register overlays, where I have to guarantee byte-wise alignment and the precise placement of members)?

Is there a best practice for integrating C code?

Typically (and up to this point), I've just done the following:
/* Example 5: Linking a C library */

/* Disable name-mangling, and then give the C++ linker /
 * toolchain the compiled binaries */
#ifdef __cplusplus
extern "C" {
#endif /* C linkage */

#include "device_driver_header_or_a_c_library.h"

#ifdef __cplusplus
}
#endif /* C linkage */

/* C++ code goes here */
As far as I know, this is the only way to prevent the C++ compiler from generating different object symbols than those in the C header file. Again, this may just be ignorance of C++ standards on my part.
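The variant I've also seen (and would probably standardize on if it passes review) puts the guard inside the C header itself, so every C++ consumer can just include it directly; a sketch with a hypothetical driver API:
/* device_driver.h -- hypothetical C header carrying its own linkage guard */
#ifdef __cplusplus
extern "C" {
#endif

void driver_init(void);
int driver_read(unsigned int reg);

#ifdef __cplusplus
}
#endif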

What is the proper way to selectively incorporate RTTI without code size bloat?

Is there even a way? I'm relatively fluent in CMake, but I guess the underlying question is whether binaries that incorporate RTTI are compatible with those that don't (and the pitfalls that may ensue when mixing the two).

What about compile time string formatting?

One of my biggest gripes about C (particularly regarding string manipulation) is that variadic arguments get handled at runtime, especially on embedded targets. This makes string manipulation via the C standard library (printf-style format strings) uncomputable at compile time in C.
This is sadly the case even when the ranges and values of parameters and formatting outputs are entirely known beforehand. C++ template programming seems to be a big thing in "modern" C++ and I've seen a few projects on this sub that use the turing-completeness of the template system to do some crazy things at compile time. Is there a way to bypass this limitation using C++ features like constexpr, templates, and lambdas? My (somewhat pessimistic) suspicion is that since the generated assembly must be ABI-compliant this isn't possible. Is there a way around this? What about the std::format stuff I've been seeing on this sub periodically?
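As a concrete (and very rough) sketch of the kind of thing I'm hoping is possible, assuming C++17: counting format specifiers in a string at compile time, so an arity mismatch can become a compile error rather than a runtime surprise.
#include <cstddef>
#include <string_view>

/* Count printf-style specifiers ("%d", "%u", ...) at compile time;
 * "%%" is an escaped percent sign and doesn't count. */
constexpr std::size_t count_specifiers(std::string_view fmt)
{
    std::size_t n = 0;
    for (std::size_t i = 0; i < fmt.size(); ++i) {
        if (fmt[i] == '%' && i + 1 < fmt.size()) {
            if (fmt[i + 1] != '%')
                ++n;
            ++i; /* skip the character after '%' */
        }
    }
    return n;
}

static_assert(count_specifiers("x=%u y=%u") == 2, "arity check");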

Is there a standard practice for namespaces and when to start incorporating them?

Is it from the start? Is it when the boundaries of a module become clearly defined? Or is it just personal preference / based on project scale and modularity?
If I had to make a guess, it would be at the point where you get a "build group" for a project (a group of source files that should be compiled together), as that would loosely define the boundaries of a series of abstractions / APIs you may provide to other parts of a project.
--EDIT-- markdown formatting
submitted by aWildElectron to cpp [link] [comments]

I need help with my Oc

Character's Name

(Akio Tatsuo)

Character's Code Name
〔Starman〕
Character's Nicknames
Dragon / Reach / Tan
Character's Colour
〔Blue〕
Character's Birthday
02 / 01
Character's Age
17
Height ◉ Weight
5'2" ✤ 00lbs
Blood Type
O Positive
Dominant Hand
Use 〔✕〕 or 〔✓〕 to Check off
〔✕〕 Left
〔✓〕 Right
〔✕〕 Ambidextrous
Optional
╭┈┈┈┈┈┈┈┈┈┈┈┈┈╮
Family Members
Father:Miyahira Shin
Mother:Teruya Kin
Sister:Doi Shigeko
Brother:Yasui Tamotsu
Cousins: F/Yoneda Aiko _M/Isobe Yuki
Aunts:Takeda Madoka
Pets:Iggy
╰┈┈┈┈┈┈┈┈┈┈┈┈┈╯
╭┈┈┈┈┈┈┈┈┈┈┈┈┈╮
OC's Race
〔African American〕
Astrology Sign
〔Aquarius〕
╰┈┈┈┈┈┈┈┈┈┈┈┈┈╯
╭┈┈┈┈┈┈┈┈┈┈┈┈┈╮
Sex
Use 〔✕〕 or 〔✓〕 to Mark off
〔✓〕 Male
〔✕〕 Female
〔✕〕 Trans Male
〔✕〕 Trans Female
〔✕〕 Non-binary
〔✕〕 They/Them
〔✕〕 It/That/This
Sexuality
Use 〔✕〕 or 〔✓〕 to Mark off
〔✓〕 Straight
〔✕〕 Lesbian
〔✕〕 Big Gay
〔✕〕 Bi-curious
〔✕〕 Bisexual
〔✕〕 Asexual
〔✕〕 Pansexual
〔✕〕 AllTypes0fSexual
〔✕〕 Or insert here...
╰┈┈┈┈┈┈┈┈┈┈┈┈┈╯
Character's Mind Points
❐ Guts: 3.99/5
❐ Charm: 4/5
❐ Kindness: 4/5
❐ Proficiency: 3/5
❐ Intelligence: 4.6/5
╭┈┈┈┈┈┈┈┈┈┈┈┈┈╮
Likes
  1. Helping People
  2. Training
  3. Reading
  4. Hanging with Friends
╰┈┈┈┈┈┈┈┈┈┈┈┈┈╯
╭┈┈┈┈┈┈┈┈┈┈┈┈┈╮
Dislikes
  1. Limitations
  2. Being lonely
  3. Dull or boring situations
  4. Broken promises
╰┈┈┈┈┈┈┈┈┈┈┈┈┈╯
╭┈┈┈┈┈┈┈┈┈┈┈┈┈╮
Hobbies
  1. Reading
  2. Playing Video Games
  3. Taking Pictures
  4. Sneaking
╰┈┈┈┈┈┈┈┈┈┈┈┈┈╯
╭┈┈┈┈┈┈┈┈┈┈┈┈┈╮
Fears
  1. Being Cut Out of Someone's Life
  2. Not Protecting
  3. Restrictions
  4. identities that have other thoughts
Dreams / Goals
  1. Getting Rid Of Evil
  2. Become Strong
  3. Helping People, Who Can’t
╰┈┈┈┈┈┈┈┈┈┈┈┈┈╯
╭┈┈┈┈┈┈┈┈┈┈┈┈┈╮
Personality Traits
Mark all Traits that applies
Use 〔✕〕 or 〔✓〕 to Check off
〔✓〕 Clever
〔✓〕 Silly
〔✓〕 Clumsy (sometimes)
〔✓〕 Aggressive (sometimes)
〔✓〕 Calm
〔✓〕 Mature
〔✓〕 Cold
〔✓〕 Tricky
〔✓〕 Sarcastic
〔✓〕 Savage
〔✓〕 Heroic
〔✓〕 Brave
〔✓〕 Silent
〔✓〕 Sharp
〔✓〕 Thug/Gangsta (sometimes)
〔✓〕 Energetic
〔✓〕 Troublemaker
╰┈┈┈┈┈┈┈┈┈┈┈┈┈╯
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
List of Arcanas'
Use 〔✕〕 or 〔✓〕 to Check off
Choose only 1 Arcana per OC
created using this Template
〔✓〕 The Sun
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
Character's Theme
〔https://open.spotify.com/track/0nA51z4UszHDnVihedcPFQ〕
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
OC's Awakening Dialogues
〔I won’t let You Hurt Anyone Anymore〕
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
OC's All-out-attack Dialogue
〔Let’s Finish This〕
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
Character's Arcana
📷
.•°\* *°•.
Character's Outfit Design
📷
Persona Mask
📷
Character's Persona Name
(Mara )
Character's Persona Design
📷

Persona's Ultimate Design📷
Character's Main Weapon
📷
Character's Sub Weapons
📷
Character's Combat Status
❐ Agility - 6/10
❐ Strength - 7/10
❐ Tricky - 8/10
❐ Attk Power - 7.5/10
❐ Dfns Power - 5.91/10
❐ SP/Mana Bar - 100/100
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
Character's Skill Element
Use 〔✕〕 or 〔✓〕 to Mark off 1 Option
〔✓〕 Curse
〔✓〕 Fire
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
Character's Buffs/Skill List
❐ (Devil’s Hell/Fire)- The Persona slams the ground and fire burst from the ground
✤ 20 SP
❐ (Head Spin/Gun)- The User spins around and shoots 4 bullets in the head
✤ 15 SP
❐ (Heat Flash/Curse)- Mara puts a spell on the user or persona and makes them
✤ 13 SP
❐ (Blazing Hell/Fire)- The Persona blasts a beam of fire from their mouth
✤ 54 SP
❐ (Die For Me/Curse)-Die For Me! has a high chance of instantly killing all enemies. It drops their HP to 0 and its accuracy is affected by resistances to Dark.
✤ 44 SP
❐ (Triple Down/Gun)-
✤ ?? SP
❐ (Blade Assault/Sword )_ Chops Up
✤ 18 SP
❐ (Eiha/Curse)
✤ 4 SP
❐ (Sharp Hacker)_ Slices quickly with the sword
✤ 20 SP
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
Elemental Strengths
〔Could Curse People and Burn Them 〕
Elemental Weaknesses
〔Ice〕
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
.•°\* *°•.
Character's Occupation(s)
〔Student, Phantom Thief, Janitor〕
Character's School Name
〔Shujin Academy〕
Character's School Year
〔4 Years〕
.•°\* *°•.
Character's School Uniform Design
📷
Character's Summer Uniform Design
📷 ∥
Character's Casual Clothe Design
📷
Character's Winter Cloth Design
📷∥ ∥
.•°\* *°•.
Character's Current Status
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
Use 〔✕〕 or 〔✓〕 to Check off
〔✓〕 Single Route
〔✓〕 Dating Route
〔✓〕 Friendzoned
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
.•°\* *°•.
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
.•°\* *°•.
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮
.・✢ .+ ° * • ♔ • * ° +. ✢・.
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯
Female Romance Confidants
Use 〔✕〕 or 〔✓〕 to Check off
〔 〕 Ann Takamaki
〔✓〕 Makoto Nijima
〔 〕 Futaba Sakura
〔 〕 Haru Okumura
〔 〕 Hifumi Togo
〔 〕 Kasumi Yoshizawa
〔 〕 Chihaya Mifune
〔 〕 Sadayo Kawakami
〔 〕 Tae Takemi
〔 〕 Prosecutor Nijima
〔 〕 Ichiko Ohya
〔 〕 Shiho Suzui
〔 〕 Model Mika
〔 〕 Or insert oc...
Background
My character is based on my life, how he's alone and shy and doesn't have many friends. We both help people where we can and we try our very best. We're both smart too. But this character is mostly about what I would do if I had a Persona.
submitted by GokuTheUltraSaiyan to Persona5 [link] [comments]

Transform that requires more inputs?

What is the architectural idea behind the limited number of input iterators in all of the std <numeric> / <algorithm> functions? This design makes the naked for-loop a much better option than the algorithms for problems that cannot be reduced to just a couple of inputs.
Take transform. Its biggest signature is:
template <class ExecutionPolicy,
          class ForwardIt1, class ForwardIt2, class ForwardIt3,
          class BinaryOperation>
ForwardIt3 transform(ExecutionPolicy &&policy,
                     ForwardIt1 first1, ForwardIt1 last1,
                     ForwardIt2 first2, ForwardIt3 d_first,
                     BinaryOperation binary_op);
(From cppreference)
What is the point of this limited number of inputs and the requirement of exactly two? Why isn't the signature something like:
template <class ExecutionPolicy,
          class MultivariateOperation,
          class ForwardIt,
          class... InputsOutputIt>
auto transform(ExecutionPolicy &&policy,
               MultivariateOperation op,
               ForwardIt first1, ForwardIt last1,
               InputsOutputIt... io);
Where MultivariateOperation requires operator() to take the value type of ForwardIt followed by the value types of all but the last of InputsOutputIt, and the last one is used as the output, so that the auto return type can be deduced as expected?
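For reference, here is roughly what I have in mind; a hand-rolled n-ary version is only a few lines (a sketch, C++17, no execution policy, and it assumes every extra input range is at least as long as [first1, last1); zip_transform is my own name for it):
template <class MultivariateOperation, class ForwardIt, class OutputIt,
          class... ForwardIts>
OutputIt zip_transform(MultivariateOperation op,
                       ForwardIt first1, ForwardIt last1,
                       OutputIt d_first, ForwardIts... firsts)
{
    /* Walk all input ranges in lockstep; the comma fold advances
     * every extra iterator in the pack on each step. */
    for (; first1 != last1; ++first1, ++d_first, ((void)++firsts, ...))
        *d_first = op(*first1, *firsts...);
    return d_first;
}
Summing three vectors element-wise would then be zip_transform([](int x, int y, int z) { return x + y + z; }, a.begin(), a.end(), out.begin(), b.begin(), c.begin());. I've pulled the output iterator out of the pack here because deducing "everything but the last" from a parameter pack is itself awkward, which may be part of why the standard stops at two inputs.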
submitted by megayippie to cpp [link] [comments]

Feature Requests for Tutanota

This is just an aggregate post with some feature requests I wanted to put out there. Some are petty, but they're just my very subjective suggestions. Sorry that it's a bit long. ^-^'
  1. I think it'd be nice as an organization administrator, to be able to define custom signatures, or signatures templates that can appear in the dropdown list of users in the organization.
For example, as the admin, I could add custom signature template which standardizes how users have their name, position, and contact information in the bottom of emails, or alternatively, a signature that includes a link to our pages like GitLab, Mastodon, and Matrix.
It would be even better if the administrator can also choose the default signature for any users in the organization.
  2. Where Tutanota instructs us on the records to add to our DNS: I think it would be friendlier to put an (i) next to each record with an explanation of why the record is required and what it does. I'm unsure if I should've been aware of MTA-STS, for example, but I had never heard of the term. I appreciate there is a dedicated page which provides some information, but putting this in the app instead would be much more intuitive and quicker to access on demand.
  3. Tutanota should allow a list of trusted media sources. Currently, Tutanota blocks all images in all emails by default, including emails whose images have been displayed before. Clients like Thunderbird do this by default too, but also allow exceptions to be configured, for example by sender address, sender domain, or resource location.
I'd like to be able to click the image icon, but instead of the dialog appearing, have a dropdown instead with options like:
Automatic image loading has been blocked to protect your privacy.
* Unblock just this once.
* Always unblock for emails sent from this user.
* Always unblock for emails sent from this domain.
* Always unblock for the following resource locations: xyz.com, xyz.art, xyz.org
  4. As an administrator of an organization, I'd like to get a visual representation of the storage consumed by users. We're capable of seeing the total storage used, and the storage used by users one at a time. It'd be more useful to get a pie chart that shows all users at once. It would be even better if the pie chart could have nested data to show where the consumed storage is concentrated in the archive, and what could be deleted to save space. This is especially useful for redundant emails with binary files lost in archived emails.
  5. Currently, when looking in the Subscription settings, the "Storage Capacity" section can show the total used storage in different units, for example: "110.3 KB used of 1 GB". This can be tedious to read; usually it's nicer to have relative figures, or a percentage. I think it would be much better if it displayed as "0.0001 GB used of 1 GB", or alternatively "0.01103% of storage used", or even both. This suggestion is only about using the same unit, or a percentage. I don't know the right number of decimal places or significant figures for optimal UI/UX.
submitted by SethsUtopia to tutanota [link] [comments]

Weekly Dev Update 12/10/2020

Hey Y’all,
The dev update is a little later this week due to the craziness of the Loki Salty Saga hardfork. All bases were loaded with Salty Saga this week; obviously this involved lots of work on Loki core and the Loki wallets to make sure everything was working for the hardfork. The hardfork has also made a new and improved Session onion requests protocol possible, which the Session team is now focused on implementing. Meanwhile, the Lokinet team worked on ensuring the Windows GUI is properly functioning and on assessing the stability of the network as the hardfork occurred.
Loki Core
Loki Wallets
----------------------------
Lokinet
You can catch Jeff, the lead developer of LLARP, live streaming as he codes at https://www.twitch.tv/uguu25519. He typically streams on Tuesday mornings, 9am - 12pm Eastern (US) time.
What went on last week with Lokinet: This past week and a half was spent making some configuration improvements from Service Node operator feedback, along with whipping the GUI control panel into shape — particularly on Windows and Mac. While Lokinet itself has been running fine from Service Nodes and the command-line interface, the client interface for this release proved to be a bit more troublesome. After some painful days of die-hard Linux users being forced to deal with all of Windows' wonders (and the resulting functionally infinite profanity), we've nearly solved the issues and hope to get a stable GUI release for all three platforms early next week.
Lokinet PR Activity:
----------------------------
Session
Session iOS
Session Android
Session Desktop
Thanks,
Kee
submitted by Keejef to LokiProject [link] [comments]

Is this a common ghc bug (slow linker)?

Good afternoon, I'm compiling a fairly simple Haskell project which hasn't had any issues thus far. A friend committed some changes via Subversion, and now GHC stalls at the linking stage. I checked the verbose option and it seems to be hanging on "Deleting temp dirs", specifically the directory ghc68311_0 in /var/folders/. Even after reverting my friend's changes this slowdown still occurs. Any GHC gurus have any insight into this?
The full trace is:
Glasgow Haskell Compiler, Version 8.6.5, stage 2 booted by GHC version 8.6.3
Using binary package database: /Users/maincomp/.ghcup/ghc/8.6.5/lib/ghc-8.6.5/package.conf.d/package.cache
package flags []
loading package database /Users/maincomp/.ghcup/ghc/8.6.5/lib/ghc-8.6.5/package.conf.d
wired-in package ghc-prim mapped to ghc-prim-0.5.3
wired-in package integer-gmp mapped to integer-gmp-1.0.2.0
wired-in package base mapped to base-4.12.0.0
wired-in package rts mapped to rts
wired-in package template-haskell mapped to template-haskell-2.14.0.0
wired-in package ghc mapped to ghc-8.6.5
package flags []
loading package database /Users/maincomp/.ghcup/ghc/8.6.5/lib/ghc-8.6.5/package.conf.d
wired-in package ghc-prim mapped to ghc-prim-0.5.3
wired-in package integer-gmp mapped to integer-gmp-1.0.2.0
wired-in package base mapped to base-4.12.0.0
wired-in package rts mapped to rts-1.0
wired-in package template-haskell mapped to template-haskell-2.14.0.0
wired-in package ghc mapped to ghc-8.6.5
*** Chasing dependencies:
Chasing modules from: *Test/CalcTests.hs
!!! Chasing dependencies: finished in 1.46 milliseconds, allocated 0.722 megabytes
Stable obj: [ESf6z :-> Src.Calc, ESf6D :-> Main]
Stable BCO: []
Ready for upsweep
  [NONREC ModSummary {
      ms_hs_date = 2020-09-29 22:32:36.870126817 UTC
      ms_mod = Src.Calc,
      ms_textual_imps = [(Nothing, Prelude)]
      ms_srcimps = [] },
   NONREC ModSummary {
      ms_hs_date = 2020-09-29 22:33:01.447304406 UTC
      ms_mod = Main,
      ms_textual_imps = [(Nothing, Prelude), (Nothing, System.IO), (Nothing, System.Exit), (Nothing, Src.Calc)]
      ms_srcimps = [] }]
*** Deleting temp files:
Deleting:
compile: input file ./Src/Calc.hs
*** Checking old interface for Src.Calc (use -ddump-hi-diffs for more details):
[1 of 2] Skipping Src.Calc ( Src/Calc.hs, Src/Calc.o )
*** Deleting temp files:
Deleting:
compile: input file Test/CalcTests.hs
*** Checking old interface for Main (use -ddump-hi-diffs for more details):
[2 of 2] Skipping Main ( Test/CalcTests.hs, Test/CalcTests.o )
Upsweep completely successful.
*** Deleting temp files:
Deleting:
link: linkables are ...
LinkableM (2020-09-29 22:39:10.279253082 UTC) Src.Calc [DotO ./Src/Calc.o]
LinkableM (2020-09-29 22:39:12.122642619 UTC) Main [DotO Test/CalcTests.o]
Test/CalcTests is up to date, linking not required.
*** Deleting temp files:
Deleting:
*** Deleting temp dirs:
Deleting:

submitted by ObviousBank to haskell [link] [comments]

v0.13 - "Failed to instantiate provider xxxx to obtain schema: unknown provider"

I've been banging my head against this all day, so I'm resorting to asking for help here. To note, I've migrated other projects to v0.13 recently with no issues. I'm switching to v0.13.2 from the latest version of v0.12 in one of my repos (I also tried v0.13.1). I ran the terraform 0.13upgrade command and the only change it made was adding the required_providers block to my versions.tf file.
terraform {
  required_version = ">= 0.13"
  required_providers {
    archive = {
      source = "hashicorp/archive"
    }
    aws = {
      source = "hashicorp/aws"
    }
    template = {
      source = "hashicorp/template"
    }
  }
}
Note that I have also tried this without the required_providers section.
My pipeline validates and inits fine, but when it plans, I get this error:
Releasing state lock. This may take a few moments...

Error: Could not load plugin

Plugin reinitialization required. Please run "terraform init".

Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.

Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".

3 problems:

- Failed to instantiate provider "registry.terraform.io/-/archive" to obtain
  schema: unknown provider "registry.terraform.io/-/archive"
- Failed to instantiate provider "registry.terraform.io/-/aws" to obtain
  schema: unknown provider "registry.terraform.io/-/aws"
- Failed to instantiate provider "registry.terraform.io/-/template" to obtain
  schema: unknown provider "registry.terraform.io/-/template"

##[error]PowerShell exited with code '1'.
I've seen people have this error specifically on Terraform Cloud (which I am not using). The solution was to remove the bad provider path (ex: "registry.terraform.io/-/aws") and replace it with the good one (ex: "registry.terraform.io/hashicorp/aws"). My state file does not have any references to registry.terraform.io at all. For example, here is what the provider.archive looks like in state (some info redacted):
"resources": [ { "mode": "data", "type": "archive_file", "name": "zip_function_name", "provider": "provider.archive", "instances": [ { "schema_version": 0, "attributes": { "excludes": null, "id": "xxxxx", "output_base64sha256": "xxxxx", "output_md5": "xxxxx", "output_path": "./zip/my-file.zip", "output_sha": "xxxxx", "output_size": 721, "source": [], "source_content": null, "source_content_filename": null, "source_dir": null, "source_file": "./script/my-script-file.py", "type": "zip" } } ] }, 
In case it is relevant, the only module this project uses is the terraform-aws-modules/rds/aws module, version 2.18.0 to account for Terraform v0.13.
So in short, the only changes made were:
I can provide any more info if needed. Any help with this would be greatly appreciated!
EDIT: For more clarity, here is the output of my terraform providers command:
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/archive]
├── provider[registry.terraform.io/hashicorp/aws]
├── provider[registry.terraform.io/hashicorp/template]
└── module.db
    ├── provider[registry.terraform.io/hashicorp/aws] >= 2.49.*, < 4.0.*
    ├── module.db_option_group
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.db_parameter_group
    │   └── provider[registry.terraform.io/hashicorp/aws]
    ├── module.db_subnet_group
    │   └── provider[registry.terraform.io/hashicorp/aws]
    └── module.db_instance
        └── provider[registry.terraform.io/hashicorp/aws]

Providers required by state:

    provider[registry.terraform.io/-/archive]
    provider[registry.terraform.io/-/aws]
    provider[registry.terraform.io/-/template]
Where are these "Providers required by state" coming from if they aren't listed in the state file?
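One thing I haven't tried yet is terraform state replace-provider, which the v0.13 upgrade notes mention for rewriting the legacy -/ provider addresses that v0.12 recorded in state to the new namespaced ones (run once per provider; shown here with my three providers):
$ terraform state replace-provider registry.terraform.io/-/archive registry.terraform.io/hashicorp/archive
$ terraform state replace-provider registry.terraform.io/-/aws registry.terraform.io/hashicorp/aws
$ terraform state replace-provider registry.terraform.io/-/template registry.terraform.io/hashicorp/template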
submitted by 17bananas to Terraform [link] [comments]

DISCORD: CLONE HIGH RP ( looking for oc’s! )

hello! a group of pals and i have this beautiful server for the show clone high! currently, you can apply for canon characters or create your own based on those deceased. but right now, we’re currently looking for more oc’s to fill the server up! below, i’m going to list the template for the oc’s that you can fill out and i’ll send to the mods to grant access to the server! just send your form to me via pm, please and thank you!
BASICS
Clone of: Alias(es): Gender: ( Specify i.e. cis, trans, non-binary) Age: Orientation: Birthday:
APPEARANCE
Add physical appearance and usual outfit. Minimum 5 sentences. Optional to attach image instead/with.
PERSONAL
Personality: Likes: Dislikes: Hobbies: How does being a clone affect them overall?: Physical/Mental Illness(es): Flaws: Habits:
BACKGROUND
Optional, but if chosen to be included, must have a minimum of five sentences.
EXTRA
Additional information not included in background, personal, or basics.
we hope to see you in the server soon! any questions, please pm me! ❤️
submitted by BATTINSONS to RoleplayingForReddit [link] [comments]

Summary of Tau-Chain Monthly Video Update - August 2020

Transcript of the Tau-Chain & Agoras Monthly Video Update – August 2020
Karim:
Major event of this past month: Release of the Whitepaper. Encourages everyone to read the Whitepaper because it's going to guide our development efforts for the foreseeable future. Development is proceeding well on two major fronts:
1. Agoras Live website: Features are being added to it; only two major features are missing.
2. TML: We identified ten major tasks to be completed before the next release. Three of them are optimization features which are very important for the speed and performance of TML.
In terms of time requirements, we feel very good about staying on schedule for the end of this year. We are also bringing in two extra resources to help us get there as soon as possible.
Umar:
Been working on changes in the string relation, especially moving from binary string representation to unistring. The idea is that now rather than having two arguments in the term, you would have a single argument for the string. Thus, the hierarchy changes from two to one and that has an effect on speed and on the storage. So the first few numbers that we calculated showed that we are around 10% faster than with the binary string. There are some other changes that need to be made with regards to the string which he is working on.
Tomas:
Had to revise how we encode characters in order to be compatible with the internet. It also was the last missing piece in order to compute persistence. The reason is that the stored data has to be portable, and if TML handles characters and strings internally in the same encoding as it stores its own data, we can map strings directly into files and gain lots of speed with it. The code is now pushed to the repository and can be tested. He's also working on a TML tutorial, and likely before the next update there should be something available online.
Kilian:
Transcribed past month’s video update. You can find it on Reddit. Also, he has done more outreach towards potential partner universities and research groups and this month the response rate was better than earlier, most likely because of the whitepaper release. Positive replies include: University of Mannheim, Trier (Computational Linguistics & Digital Humanities), research group AI KR from within the W3C (https://www.w3.org/community/aik) articulated strong interest in getting a discussion going, particularly because they had some misconceptions about blockchain. They would like to have a Q&A session with a couple of their group members but first it’s important for us to have them read the whitepaper to get a basic understanding and then be able to ask respective questions. Other interested parties include the Computational Linguistics research group of the University of Groningen, Netherlands and also the Center for Language Technology of the University of Gothenburg, Sweden. We also got connected to the Chalmers University of Technology, Sweden. Also has done some press outreach in combination with the whitepaper, trying to get respective media outlets to cover our project, but so far hasn’t gotten feedback back. Been discussing the social media strategy with Ohad and Fola, trying to be more active on our channels and have a weekly posting schedule on Twitter including non-technical and technical contests that engage with all parts of our community. Furthermore, has opened up a discussion on Discord (https://discord.gg/qZtJs78) in the “Tau-Discussion” channel around the topics that Ohad mentioned he would first like to see discussed on Tau (see https://youtu.be/O4SFxq_3ask?t=2225):
  1. Definitions of what good and bad means and what better and worse means.
  2. The governance model over Tau.
  3. The specification of Tau itself and how to make it grow and evolve even more to suit wider audiences. The whole point of Tau is people collaborating in order to define Tau itself and to improve it over time, so it will improve up to infinity. This is the main thing, especially initially, that the Tau developers (or rather users) advance the platform more and more.
If you are interested in participating in the discussion, join our Discord (https://discord.gg/qZtJs78) and post your thoughts – we’d appreciate it! Also has finished designing the bounty claiming process, so people that worked on a bounty now can claim their reward by filling out the bounty claiming form (https://forms.gle/HvksdaavuJbu4PCV8). Been also working on revamping the original post in the Bitcointalk-Thread. It contains a lot of broken links and generally is outdated, so he’s using the whitepaper to give it a complete overhaul. With the whitepaper release, the community also got a lot more active which was great to see and thus, he dedicated more time towards supporting the community.
Mo’az:
Finished multiple milestones with regards to the Agoras Live website:
1. Question part where people post their requests and knowledge providers can help them with missing knowledge.
2. Have been through multiple iterations of how to approach the services on the website: how the service seeker can discover new people through the website.
3. Connected the limited, static categories on the website to add more diversity to it. By adding tags, it will be easier for service seekers to find what they are looking for.
4. Onboarding: Been working on adding an onboarding step, so the user chooses categories of his interest and as a result will find the homepage more personalized towards him and his interests.
5. New section added to the user profile: the services that the knowledge provider can provide. Can be added as tags or free text.
6. Search: Can filter via free text and filter by country, language, etc.
7. Been working on how to display the knowledge providers on the platform.
Andrei:
Improved the look of the Agoras Live front page: it looks cleaner. Finetuned search options. Redesigned the header; it now has notification icons. If you ask a knowledge provider for an appointment, he will receive a notification about the new appointment to approve or reject. You can also add a user to your favorites. The front page now randomly displays users. Also implemented email templates, e.g. a thank-you email upon registration or an appointment reminder. What is left to do is the session list, and then the basic engine will be ready. The "questions" section also still needs to be implemented.
Juan:
Has switched towards development of TML related features. Been working mainly on the first order logic support. Has integrated the formula parser with the TML core functionality. With this being connected, we added to TML quantified Boolean function solving capability in the same way as we get the first order logic support. It’s worth mentioning that this feature is being supported by means of the main optimized BDD primitives that we already have in the TML engine. Looking forward to make this scalable in terms of formula sizes. It’s a matter of refining the Boolean solution and doing proper tests to show this milestone to the community in a proper way.
Fola:
Have been discussing the feasibility of a token swap towards ERC20 from the Omni token with exchanges and internally with the team. Also has been discussing the social media strategy with Kilian. As we update with the new visual identity and the branding, it’s a good time to boost our social media channels and look ready for the next iteration of our look and feel. Continuing on the aspects of our visual identity and design, he’s been talking to quite a number of large agencies who have been involved in some of the larger projects in the software space. One being Phantom (https://phantom.land) who designed the DeepMind website (https://deepmind.com), the other one being Outcast (https://theoutcastagency.com) who have been working with Intel and SalesForce. We aren’t sure yet with which company we go but it’s been good to get insight into how they work and which steps they’d take into getting our project out to the wider audience. That whole process has been a lot of research into what kind of agencies we’d want to get involved with. Also, with the release of the whitepaper being such a big milestone in the history of the company, he’s been doing a lot of reading of that paper. We’re also looking to get more manpower involved with the TML website. Also going to hire a frontend developer for the website and the backend will be done according to Ohad’s requirements. Also, as a response of the community’s feedback towards the Omni deck not being user friendly, he did some outreach to the Omni team and introduced them to a partner exchange for Agoras Live. They have an “exchange-in-a-box” service which may help Omni to have a much more usable interface for the Omni Dex, so hopefully they will be working together to improve the usability of the Omni Dex.
Ohad:
Finished writing the community draft of the whitepaper. The final version will contain changes according to the community’s feedback and more elaboration on more topics that weren’t inserted in the current paper, including logics for law and about the full process of Tau. And, as usual, he’s been doing more research of second order logic, specifically, Boolean options and also analyzing the situation where the formulas in conjunctive normal form trying to extract some information from such a cnf. Also, what Juan mentioned about first order logic: People who are already familiar with TML will see that now with this change, the easiness of using TML got much more advanced. In first order formulas, expressing yourself has become much easier than before.
Q&A:
Q: What is the difference between Horn Second Order Logic and Krom Second Order Logic?
A: Horn and Krom are special cases of CNF (conjunctive normal form). A formula in conjunctive normal form is a conjunction of clauses (this clause and this clause), where each clause is a disjunction of atoms (this or this or that). Any formula can be brought to this form. Krom is the case where each clause contains exactly two atoms, and Horn is the case where at most one atom in every clause is positive (the rest are negated). That's the definition.
Q: Now that the whitepaper has been released, how do you think it will affect the work of the developers?
A: We see the whitepaper as being a roadmap of development for us, so it will essentially be the vision that we are working to implement. Of course, we have to turn it into much more specific tasks, but as you saw from the detailed progress from last month, that’s exactly what we do.
Q: When can we expect the new website?
A: We’ve just updated the website with the whitepaper and the new website should be launching after we get the branding done. There’s a lot of work to be done and a lot of considerations taking place. We have to get the graphics ready and the front end done. The branding is the most important step we have to get done and once that is complete, we will launch the new website.
Q: What needs to be resolved next before we get onto a solid US exchange?
A: With the whitepaper released, that’s probably been the biggest hurdle we had to get over. At this point, we still have to confirm some elements of the plan with the US regulators and we do need to have some sort of product available. Be that the TML release or Agoras Live, there needs to be something out for people to use. So, in conjunction with the whitepaper and approval from the US regulators, we need to have a product available to get onto US exchanges.
Q: Does the team still need to get bigger to reach cruising speed, if so, how much by and in which areas?
A: Of course, any development team would like to have as many resources as possible but working with the resources we that have right now, we are making significant progress towards the two development goals that we have, both the Agoras Live website and the TML engine. But we are bringing in at least two more resources in the near future but there’s no lack of work to be done and also there’s no lack of progress.
Q: Will Prof. Carmi continue to work in the team and if so, in what capacity?
A: Sure, Prof. Carmi will continue coordinating with us. Right now, he’s working on the mathematics of certain features in the derivatives market that Agoras is planned to have, and also ongoing research in relevant logic.
Q: Will you translate the whitepaper into other languages?
A: Yes, we expect translations of the whitepaper to occur. The most important languages that comprise our community, e.g. Chinese. What languages exactly, we cannot tell right now, but mainly the most prominent languages that comprise our community.
Q: Is the roadmap on the website still correct and, when will we move to the next step?
A: We will be revamping the website soon including the roadmap that will be a summary of what’s been published in the whitepaper but the old version of the roadmap on the website is no longer up-to-date.
Q: What are the requirements for Agoras to have its own chain?
A: If the question means why Agoras doesn’t have its own chain right now, well there is no special reason. We need to reach there and we will reach there.
Q: When Agoras switches to its own chain, will you need to create a new payments system from scratch?
A: No, we won’t have to. We will have to integrate with the new payment channel but that’s something we are planning to do anyway. We will be integrating with several exchanges and several payment channels so it won’t be a huge task. Most of the heavy lifting is in the wallet and key management which will be done on the client side but we’re already planning on having more than one payment gateway anyway so having one more is no problem.
Q: When can we see Tau work with a real practical example?
A: For examples of applications of TML, we are currently working on a TML tutorial and a set of demos. Two of our developers are currently working on it and it’s going to be a big part of our next release.
Q: How can we make speaking in formal languages easier, with an example?
A: Coming up with a usable and convenient formal language is a big task which maybe it’s even safe to say no one achieved up until today. But we solve this problem indirectly yet completely by not coming up with any language but letting languages to be created and evolve over time through the internet of languages. We don’t have any solution of how to make formal languages very easy for everyone. It will be a collaborative effort over Tau together to reach there over time. You can see in the whitepaper in the section 4.2 about “The Critical Mass and the Tau Chain Reaction”.
Q: What are the biggest limitations of Tau and, are they solvable?
A: TML cannot do anything that requires more than polynomial space, and there are infinitely many things like this. For example, you can look up EXPTIME- or EXPSPACE-complete problems. We would want to say ELEMENTARY, but there is no ELEMENTARY-complete problem; there are complete problems in each of the levels of ELEMENTARY. All those, TML cannot do, because this is above polynomial space. Another drawback of TML, which comes from the usage of BDDs, is arithmetic, in particular multiplication. Multiplication is highly inefficient in TML because of the nature of BDDs, and of course BDDs bring so many more good things that even this drawback of slow multiplication is small compared to all the possibilities they give us. Another limitation, which we will emphasize in the next version of the whitepaper, is the satisfiability problem: asking whether a model of a formula exists (not model checking like right now, but asking whether a model exists) is undecidable already on very restricted classes, as follows from Trakhtenbrot's theorem. So in particular the containment problem, the satisfiability problem, and the validity problem are all undecidable in TML as is, and for them to be decidable we need to restrict the expressive power even more and look at narrower fragments of the language. But again, this will be more emphasized in the next version of the whitepaper.
Q: It took years for projects such as Maidsafe to build something mediocre; why should Agoras be able to do something similar or better in less time?
A: Early on in the life of the Tau project, we’ve identified the computational resources marketplace as one of the possible applications of Tau, so it is very much on our roadmap. However, as you mentioned, there are some other projects, e.g. Filecoin, which is specifically focusing on the problem of storage. So even though it’s on our roadmap, we’re not there yet but we are watching closely what our competitors in this field are doing. While they haven’t yet delivered on their promise of an open and distributed storage network, we feel that at some point we will have more value to bring to the project. So distributed storage is on our roadmap but it’s not a priority for us right now but eventually we’ll get there.
Q: What are the requirements in scalability, e.g. permanent storage etc.?
A: We haven’t answered that question yet.
Q: Will Tau be able to run on a mobile phone?
A: Definitely, Yes. We’re planning on being available on all computational platforms, be it a server, laptop, phone or an iPad type of device.
Q: Given a vast trove of knowledge, how can Tau determine relevance? Can it also build defenses against spam attacks and garbage data?
A: Tau doesn’t offer any predetermined solution to this. It is basically all up to the user. The user will have to define what’s criminal and what’s not. Of course, most users will not bother with defining this but they will be able to automatically agree to people who already defined it and by that import their definitions. So bottom line: It’s really up to the users.
Q: What are your top priorities for the next three months?
A: Our goal for this year (2020) is to release a first version of Agoras Live and of TML.
Q: Ohad mentioned the following at the start of the year: "Time for us to work on Agoras. We need to create the Agoras team and commence work. We made a major improvement in one of Agoras' aspects in the form of a theoretical breakthrough, but we're not ready yet to share the details publicly." Is there any further news or progress with the development of Agoras?
A: If the question is whether there has been more progress in the development of Agoras, specifically with regards to new discoveries for the derivatives market, then the answer is of course yes. Professor Carmi is now working on those inventions related to the derivatives market. We still keep them secret and of course, with Agoras Live, knowledge sharing for money is coming.
submitted by m4nki to tauchain [link] [comments]

How to see the dashboard? Getting 404

Hi all, I just created a fresh Kubernetes cluster and created a namespace called 'routing'.
In there I installed the latest Traefik via the Helm chart (2.2).
I can see the pod running fine.
When I run:
kubectl get svc --namespace routing
It shows:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik LoadBalancer cluster-ip-is-here external-ip-is-here 80:32252/TCP,443:30252/TCP 33m

I tried going to https://external-ip-is-here in my browser, but it just shows 404.
I tried with just http also.
Here is the file and command I am using for the dashboard:
kubectl apply -f dashboard.yml --namespace routing
and file:
# dashboard.yml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: dashboard
spec:
entryPoints:
- web
routes:
- match: Host(`traefik.localhost`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
kind: Rule
services:
- name: api@internal
kind: TraefikService

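One thing I plan to try next, based on the chart's own notes: bypass the LoadBalancer and port-forward straight to the traefik entrypoint (port 9000 in the values below), since my IngressRoute only matches Host(`traefik.localhost`) and I'm browsing by raw IP, which alone would explain a 404:
$ kubectl -n routing port-forward deployment/traefik 9000:9000
# assuming the deployment is named "traefik"; then browse to
# http://localhost:9000/dashboard/ (the trailing slash matters)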
Here is the values file used:

# Default values for Traefik
image:
name: traefik
tag: 2.2.8
pullPolicy: IfNotPresent

#
# Configure the deployment
#
deployment:
enabled: true
# Number of pods of the deployment
replicas: 1
# Additional deployment annotations (e.g. for jaeger-operator sidecar injection)
annotations: {}
# Additional pod annotations (e.g. for mesh injection or prometheus scraping)
podAnnotations: {}
# Additional containers (e.g. for metric offloading sidecars)
additionalContainers: []
# Additional initContainers (e.g. for setting file permission as shown below)
initContainers: []
# The "volume-permissions" init container is required if you run into permission issues.
# Related issue: https://github.com/containous/traefik/issues/6972
# - name: volume-permissions
# image: busybox:1.31.1
# command: ["sh", "-c", "chmod -Rv 600 /data/*"]
# volumeMounts:
# - name: data
# mountPath: /data
# Custom pod DNS policy. Apply if `hostNetwork: true`
# dnsPolicy: ClusterFirstWithHostNet

# Pod disruption budget
podDisruptionBudget:
enabled: false
# maxUnavailable: 1
# minAvailable: 0

# Create an IngressRoute for the dashboard
ingressRoute:
dashboard:
enabled: true
# Additional ingressRoute annotations (e.g. for kubernetes.io/ingress.class)
annotations: {}
# Additional ingressRoute labels (e.g. for filtering IngressRoute by custom labels)
labels: {}

rollingUpdate:
maxUnavailable: 1
maxSurge: 1


#
# Configure providers
#
providers:
kubernetesCRD:
enabled: true
kubernetesIngress:
enabled: true
# IP used for Kubernetes Ingress endpoints
publishedService:
enabled: false
# Published Kubernetes Service to copy status from. Format: namespace/servicename
# By default this Traefik service
# pathOverride: ""

#
# Add volumes to the traefik pod.
# This can be used to mount a cert pair or a configmap that holds a config.toml file.
# After the volume has been mounted, add the configs into traefik by using the `additionalArguments` list below, eg:
# additionalArguments:
# - "--providers.file.filename=/config/dynamic.toml"
volumes: []
# - name: public-cert
# mountPath: "/certs"
# type: secret
# - name: configs
# mountPath: "/config"
# type: configMap

# Logs
# https://docs.traefik.io/observability/logs/
logs:
  # Traefik logs concern everything that happens to Traefik itself (startup, configuration, events, shutdown, and so on).
  general:
    # By default, the logs use a text format (common), but you can
    # also ask for the json format in the format option
    # format: json
    # By default, the level is set to ERROR. Alternative logging levels are DEBUG, PANIC, FATAL, ERROR, WARN, and INFO.
    level: ERROR
  access:
    # To enable access logs
    enabled: false
    # By default, logs are written using the Common Log Format (CLF).
    # To write logs in JSON, use json in the format option.
    # If the given format is unsupported, the default (CLF) is used instead.
    # format: json
    # To write the logs in an asynchronous fashion, specify a bufferingSize option.
    # This option represents the number of log lines Traefik will keep in memory before writing
    # them to the selected output. In some cases, this option can greatly help performances.
    # bufferingSize: 100
    # Filtering https://docs.traefik.io/observability/access-logs/#filtering
    filters: {}
      # statuscodes: "200,300-302"
      # retryattempts: true
      # minduration: 10ms
    # Fields
    # https://docs.traefik.io/observability/access-logs/#limiting-the-fieldsincluding-headers
    fields:
      general:
        defaultmode: keep
        names: {}
          # Examples:
          # ClientUsername: drop
      headers:
        defaultmode: drop
        names: {}
          # Examples:
          # User-Agent: redact
          # Authorization: drop
          # Content-Type: keep

globalArguments:
  - "--global.checknewversion"
  - "--global.sendanonymoususage"

#
# Configure Traefik static configuration
# Additional arguments to be passed at Traefik's binary
# All available options available on https://docs.traefik.io/reference/static-configuration/cli/
## Use curly braces to pass values: `helm install --set="additionalArguments={--providers.kubernetesingress.ingressclass=traefik-internal,--log.level=DEBUG}"`
additionalArguments: []
# - "--providers.kubernetesingress.ingressclass=traefik-internal"
# - "--log.level=DEBUG"

# Environment variables to be passed to Traefik's binary
env: []
# - name: SOME_VAR
#   value: some-var-value
# - name: SOME_VAR_FROM_CONFIG_MAP
#   valueFrom:
#     configMapRef:
#       name: configmap-name
#       key: config-key
# - name: SOME_SECRET
#   valueFrom:
#     secretKeyRef:
#       name: secret-name
#       key: secret-key

envFrom: []
# - configMapRef:
#     name: config-map-name
# - secretRef:
#     name: secret-name

# Configure ports
ports:
  # The name of this one can't be changed as it is used for the readiness and
  # liveness probes, but you can adjust its config to your liking
  traefik:
    port: 9000
    # Use hostPort if set.
    # hostPort: 9000
    #
    # Use hostIP if set. If not set, Kubernetes will default to 0.0.0.0, which
    # means it's listening on all your interfaces and all your IPs. You may want
    # to set this value if you need traefik to listen on specific interface
    # only.
    # hostIP: 192.168.100.10

    # Defines whether the port is exposed if service.type is LoadBalancer or
    # NodePort.
    #
    # You SHOULD NOT expose the traefik port on production deployments.
    # If you want to access it from outside of your cluster,
    # use `kubectl proxy` or create a secure ingress
    expose: false
    # The exposed port for this service
    exposedPort: 9000
    # The port protocol (TCP/UDP)
    protocol: TCP
  web:
    port: 8000
    # hostPort: 8000
    expose: true
    exposedPort: 80
    # The port protocol (TCP/UDP)
    protocol: TCP
    # Use nodeport if set. This is useful if you have configured Traefik in a
    # LoadBalancer
    # nodePort: 32080
    # Port Redirections
    # Added in 2.2, you can make permanent redirects via entrypoints.
    # https://docs.traefik.io/routing/entrypoints/#redirection
    # redirectTo: websecure
  websecure:
    port: 8443
    # hostPort: 8443
    expose: true
    exposedPort: 443
    # The port protocol (TCP/UDP)
    protocol: TCP
    # nodePort: 32443

# Options for the main traefik service, where the entrypoints traffic comes
# from.
service:
  enabled: true
  type: LoadBalancer
  # Additional annotations (e.g. for cloud provider specific config)
  annotations: {}
  # Additional entries here will be added to the service spec. Cannot contain
  # type, selector or ports entries.
  spec: {}
    # externalTrafficPolicy: Cluster
    # loadBalancerIP: "1.2.3.4"
    # clusterIP: "2.3.4.5"
  loadBalancerSourceRanges: []
    # - 192.168.0.1/32
    # - 172.16.0.0/16
  externalIPs: []
    # - 1.2.3.4

## Create HorizontalPodAutoscaler object.
##
autoscaling:
  enabled: false
  # minReplicas: 1
  # maxReplicas: 10
  # metrics:
  #   - type: Resource
  #     resource:
  #       name: cpu
  #       targetAverageUtilization: 60
  #   - type: Resource
  #     resource:
  #       name: memory
  #       targetAverageUtilization: 60

# Enable persistence using Persistent Volume Claims
# ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
# After the pvc has been mounted, add the configs into traefik by using the `additionalArguments` list below, eg:
# additionalArguments:
# - "--certificatesresolvers.le.acme.storage=/data/acme.json"
# It will persist TLS certificates.
persistence:
  enabled: false
  # existingClaim: ""
  accessMode: ReadWriteOnce
  size: 128Mi
  # storageClass: ""
  path: /data
  annotations: {}
  # subPath: "" # only mount a subpath of the Volume into the pod

# If hostNetwork is true, runs traefik in the host network namespace
# To prevent unschedulable pods due to port collisions, if hostNetwork=true
# and replicas>1, a pod anti-affinity is recommended and will be set if the
# affinity is left as default.
hostNetwork: false

# Whether Role Based Access Control objects like roles and rolebindings should be created
rbac:
  enabled: true

  # If set to false, installs ClusterRole and ClusterRoleBinding so Traefik can be used across namespaces.
  # If set to true, installs namespace-specific Role and RoleBinding and requires provider configuration be set to that same namespace
  namespaced: false

# The service account the pods will use to interact with the Kubernetes API
serviceAccount:
  # If set, an existing service account is used
  # If not set, a service account is created automatically using the fullname template
  name: ""

# Additional serviceAccount annotations (e.g. for oidc authentication)
serviceAccountAnnotations: {}

resources: {}
  # requests:
  #   cpu: "100m"
  #   memory: "50Mi"
  # limits:
  #   cpu: "300m"
  #   memory: "150Mi"
affinity: {}
# # This example pod anti-affinity forces the scheduler to put traefik pods
# # on nodes where no other traefik pods are scheduled.
# # It should be used when hostNetwork: true to prevent port conflicts
#   podAntiAffinity:
#     requiredDuringSchedulingIgnoredDuringExecution:
#       - labelSelector:
#           matchExpressions:
#             - key: app
#               operator: In
#               values:
#                 - {{ template "traefik.name" . }}
#         topologyKey: failure-domain.beta.kubernetes.io/zone
nodeSelector: {}
tolerations: []

# Pods can have priority.
# Priority indicates the importance of a Pod relative to other Pods.
priorityClassName: ""

# Set the container security context
# To run the container with ports below 1024 this will need to be adjusted to run as root
securityContext:
  capabilities:
    drop: [ALL]
  readOnlyRootFilesystem: true
  runAsGroup: 65532
  runAsNonRoot: true
  runAsUser: 65532

podSecurityContext:
  fsGroup: 65532
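(One more editorial note on these values, since it bears on the 404: the 'traefik' entrypoint on port 9000 has expose: false, and in the 2.x charts the built-in dashboard IngressRoute is attached to that internal entrypoint, so the dashboard is not reachable through the LoadBalancer service at all by default. A sketch of the port-forward approach the comments above suggest, assuming the Deployment ended up named traefik:)
kubectl --namespace routing port-forward deployment/traefik 9000:9000
(then browse to http://127.0.0.1:9000/dashboard/ with the trailing slash included.)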
submitted by bran-695 to Traefik [link] [comments]

binary options template - YouTube
Binary Options: Top 18 Templates/Indicators for Binary Options or Forex ...
Free Template Strategy for Binary Options, Explained ...
Binary Options Template TradingView ️‍ Signal VDub ProV2, 5-15 Minutes, IQ Option System
Binary Options: Best IQ Option Template, Trading a Remarkable 92% Return ✔️ Martingale Tutorial ✔️
Binary Options Template, It Works Easy
binary options template free, binary options ...
Binary Options: Template Trading Simply Explained

Win Win Binary Options Template; Mak Binary Options Template; Binary Options Channel; Binary Options System; Binary Options Calculator; Binary Options Signals; Binary Options Trader; Binary Options Oscillator; Binary Options Buddy 2.0; Binary Options Arrow Indicator; HLOC Binary Options System; Binary Options Profit System; Binary Options Master System; Simple Binary Options System; Binary ...

Before we turn to the binary options template comparison, we should first answer the more general question of what a template actually is. As the name suggests, it is a preset that can be loaded into MetaTrader. These presets, or templates, support chart analysis and can help you find better entry points for a trade ...

Binary Options Microsoft Word templates are ready to use and print. Download Binary Options Word template designs today. Template library: PoweredTemplate.com.

Binary option template. Dec 27, 2013: Binary options have an expiration time, and therefore cap your profits in two dimensions: price and time. These are drawn automatically, and we only need to pay attention when an arrow appears. When creating a template on a live quotation chart, we used a 1-minute timeframe. In every marketplace there are major shifts due to innovation and ...

Power Template Strategy. More information. Milestone Connect. Why Almost Everyone Fails at Binary Options; Anyoption Binary Options Reviews. At Milestone we help chemists by providing the most innovative solutions for sample preparation for trace metal analysis and direct analysis. Milestone has been active since ... in the field of sample preparation. We are the acknowledged ...

Empire Option; OptionFair; Robots; Signals; Strategy; Scams; Forum. Trading Concepts: Creating a Trading Plan. If you start a business, you need a plan. With no direction or plan for how you'll make a profit, your business is likely doomed. Trading is no different: if you want to succeed, you'll need to think of trading like you would a business. After all, through your research, skills and ...

If you search for strategies or trading methods for binary options, you quickly come across many so-called templates for trading binary options, often with sensational names like Blue Power Template, BOkay XY, or similar. Some of them are free (like mine, for example; see below), others can get quite expensive (I have seen some go for 1,200 euros). What is ...

2,921 binary options website templates, for example the Monstroid2 multipurpose Bootstrap website template by ZEMEZ (1,601 sales, $75) and the Intense HTML Bootstrap website theme by ZEMEZ (4,069 sales).

Download a huge collection of binary options strategies, trading systems and binary options indicators, 100% free. Get your download link now.

The BBand Stop Strategy is a 5-minute binary options strategy which uses the BBand Stop alert indicator in MT4 to define the ideal position to enter a trade. How to set up the chart: Timeframe: M5; Template: BBand Stop Strategy (download from eDisk or UlozTo.Net). How does this strategy work? Arrows (pointing up and down) will be displayed over/under [...] Tags: BBand Stop strategy, binary options ...


binary options template - YouTube

In this video we show you how to use the Leftoak trading templates, with a brief outline of the strategy behind each of the four templates. - E...
Binary options template Metatrader 4 Blaster 60 seconds 🤘 Binary options 😎 2017 😎 IqOption - Duration: 7:34. MyBinaereOptionen.com, 84,275 views. 7:34.
Free binary options template, free binary options signals: http://bitlye.com/7k7yHv Crypto Trader is a group aimed exclusively at people...
Binary options template, IQ Option template, registration: https://goo.gl/nqh8R8 Martingale technique, a remarkable 92% return. This binary options template produces the best return. This ...
http://www.printmyatm.com/go/mt4indicators Like, comment, and subscribe! Top 18 templates/indicators for binary options or forex trading! [PMA Insider] https...
Start with the binary options template now: https://goo.gl/o4ktMA The best information on trading at https://goo.gl/PgEihU Trade binary options successfully. Here I present the template ...
Binary options template, it works easy with Binary Light and 24option: http://option.go2jump.org/SHJ702 . Community: http://www.binaryoption.social/ . Binary options ...
Aktienrunde.de binary options comparison: http://www.aktienrunde.de/binaere-opt... Aktienrunde.de Facebook group: https://www.facebook.com/groups/aktie... Our product overview:
binary options template http://bitlye.com/7k7yHv Bitcoin makes people rich, and you could become the next millionaire. The Bitcoi...
