Here's How You Can Create a Highly Profitable Website
Fundamentally, a website is your most powerful tool because it can evolve as your business grows and because it's a tireless worker. Nobody on the planet can work harder than a website: it's available 365 days a year, 24/7. It doesn't need to sleep, never gets sick and won't burn out or require time off.

Yet the website is the biggest missed opportunity for most businesses, because the vast majority of websites simply don't work. When someone says their website "doesn't do anything," it almost always comes down to one thing: sales.

Imagine this scenario: You're trying to talk to a salesperson, except this particular person only awkwardly stares at you. Whenever you ask a question, you get a robotic response repeating the same couple of sentences with extremely vague information and a list of words. You don't need to be an oracle to predict the future on this one: Nobody in their right mind is buying from this person.

Communication is the most basic requirement for a sale to happen. Your website is a tool to communicate digitally, and most sites communicate like that creepy salesperson who can only repeat the same vague information. Fundamentally, that's why the vast majority of businesses have websites that fail. Most businesses don't know what to focus on, so they spend money in random directions hoping it'll "fix the website." Would 10,000 more people talking to the creepy salesperson eventually result in a sale? They'd probably get a couple, but with a terrible close ratio and an expensive cost to get that many people in front of them.

So where do you need to focus to make the website effective?

1. Understand that your website is never finished

Websites have infinite possibilities for what they can do. The problem is that most people overcomplicate and overthink their websites from the get-go. They'll look at competitors' websites, assume those sites are working and want to copy or duplicate everything going on there. This leads to people agonizing over small details, often for months on end, because the assumption is that "once it's done, it's done."

Don't put that much pressure on yourself. Rather than thinking of a website as "done," think of it as a perpetual work in progress that grows and evolves with you and your business. No design is perfect and there will always be flaws. Stop striving for perfection and instead adopt a mindset of testing and improving.

A highly experienced web design company will give you an amazing head start and let you begin the race halfway to the finish line, at a fraction of the cost of figuring out how to get there otherwise. But even with the head start, you still need to run the race to win. Too many times people hire designers thinking that will solve everything on its own, so they never actually "run the race."

Related: 3 Reasons Your Website Will Never Be Finished

2. Website design and content are how a website communicates

Most websites overcomplicate design, which makes communication ineffective and development costly and time-consuming. Start very simple. The core webpages you need are a very small set: home page, about page, products/services and contact. Keep the design clean and clear. Far too many websites pile too many things onto a single page; like an empty countertop, it begs to be filled with clutter.
Think about it: Which looks more appealing to a guest, a kitchen full of stacked old dishes or a clean countertop? Simpler designs also make your content creation needs easier to digest and tackle. Remember, the website is always a work in progress and you can always add more pages.

Even in today's media-rich world, writing is the primary way people consume content online. Writing is how your website "talks" to a customer, and bland, generic content makes customers think you're bland and generic too. Here's an example: Rather than listing services and products with industry verbiage your average customer doesn't understand, illustrate the problems you solve and the process you implement. People buy based on whether the product or service can solve their problem; features are only used to compare one service to another. By walking through the process they'll go through as a customer and how each problem is solved in a way they can understand, you give your customer a lot more confidence in your company because you "get them."

Related: How to Make Your Website Your Best Salesperson, and Not Your Worst Money Pit

3. You must use analytics and know these two core stats

Websites are ever-evolving and constantly changing. Once your design is out there, you need to see how effective the communication really is, and the only way to do that is to know what visitors are doing when they get to the website. There are plenty of free analytics tools available, with Google Analytics being the most popular. While there are tons of useful stats, the vast majority of your analysis comes down to two: average time on page (also called average engagement time) and users. When you're deciding how to improve a website, the vast majority of decisions rest on those two stats. Fortunately, you don't need to be a tech whiz to know them: if you can log into analytics, they display in a graph on the first page.

Average time on page tells you how effective your website is at communicating; more time means you're more effective. Situations vary, but generally, people spending less than 30 seconds means you really need to improve, while people spending 1.5 to 2 minutes or more means you're doing pretty well.

If you don't know these stats, you have no idea what the problem with your website is. Random guesses at what to change rarely work out, and that's why the vast majority of businesses spend money on website improvements that don't actually solve their problem. For example, if you buy a service like ads, SEO or leads and send that traffic to your site, but people spend less than 10 seconds there, it doesn't matter how much money you throw at those services (and in fact, more and more of SEO is based on how long people spend on your site). The reverse is also true: if you spend money on a redesign and visitors now spend an average of five minutes on the site, but only two people are visiting, the redesign won't solve your problem. What you need is more people seeing your site.

Remember: Your website doesn't need to be perfect from the get-go. You can change anything you want at any time, and the goal needs to be steady, consistent improvement based on knowing what the actual problem is. All you need to do to win at the website game is make frequent, small and simple changes based on customer stats.

Related: A Small-Business Guide to Google Analytics (Infographic)
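If you'd rather pull those two stats programmatically than read them off the first-page graph, here is a minimal sketch using the GA4 Data API's official Python client (the google-analytics-data package). The property ID is a placeholder, and because GA4 reports total engagement duration per page rather than an average, the sketch derives the average by dividing by users:

```python
# Minimal sketch: pull "users" and "average engagement time" per page from GA4.
# Assumes google-analytics-data is installed and GOOGLE_APPLICATION_CREDENTIALS
# points at a service-account key with access to the property.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange,
    Dimension,
    Metric,
    RunReportRequest,
)

PROPERTY_ID = "123456789"  # placeholder: your GA4 property's numeric ID


def fetch_core_stats(property_id: str) -> None:
    """Print users and average engagement time per page for the last 30 days."""
    client = BetaAnalyticsDataClient()
    request = RunReportRequest(
        property=f"properties/{property_id}",
        dimensions=[Dimension(name="pagePath")],
        metrics=[
            Metric(name="activeUsers"),
            Metric(name="userEngagementDuration"),  # total seconds across users
        ],
        date_ranges=[DateRange(start_date="30daysAgo", end_date="today")],
    )
    response = client.run_report(request)
    for row in response.rows:
        path = row.dimension_values[0].value
        users = int(row.metric_values[0].value)
        total_seconds = float(row.metric_values[1].value)
        avg_time = total_seconds / users if users else 0.0
        print(f"{path}: {users} users, {avg_time:.0f}s average engagement time")


if __name__ == "__main__":
    fetch_core_stats(PROPERTY_ID)
```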
How Website Indexing Works (And How To Make It Work Better)
By David Hunter, CEO of Epic Web Studios and ASAPmaps in Erie, PA. He also co-founded dbaPlatform, local SEO software.

Suppose you've just composed the most objectively useful, engaging and brilliant web content ever. Now suppose that content remained unseen and unheard of, never once appearing in search results. While that may seem unconscionable, it's exactly why you cannot overlook website indexing. Search engines like Google love delivering the good stuff just as much as you love discovering it, but they cannot serve users results that haven't been indexed first. Search engines constantly add to their colossal libraries of indexed URLs by deploying scouts called "spiders," or "web crawlers," to find new content.

How Web Crawlers Index Content

Even for spiders, the web is a lot to navigate, so they rely on links to guide their way, pointing them from page to page. In particular, they've got their eyes on new URLs, sites that have undergone changes and dead links. As the web crawlers come across new or recently altered pages, they render them out much like a web browser would, seeing what you see. However, whereas you might skim the content quickly for the information you need, the crawlers are much more thorough. They scale the page up and down, creating an index entry for every unique word. Thus a single web page could be referenced in hundreds (if not thousands) of index entries!

Getting To Know Your Crawlers

At any given time, there may be hundreds of different spiders crawling the internet, some good and some bad (e.g., those looking to scrape email directories or collect private information for spamming purposes). But there are a handful you want to be particularly aware of:

• Googlebot (Google)
• Bingbot (Bing)
• Slurp (Yahoo)
• Facebot (Facebook external links)
• Alexa crawler (aka ia_archiver, for Amazon's Alexa)

Give Crawlers Guidelines With Robots.txt And Meta Directives

There may be situations where you do not want certain pages indexed, such as:

• Those that would not make quality landing pages from search (e.g., a "thank you" page for form submissions, a promo code reveal page)
• Those intended for internal use only (testing or staging purposes)
• Those containing private or personal information

What's more, Googlebot and other prominent spiders have crawl budgets built into their programming — they'll only crawl so many URLs on your site before moving on (although it should be noted that crawl budgets are massive compared to what they once were). So as a site administrator, not only do you want to lay down some rules, you also want to set some priorities (crawl budget optimization). There are two primary ways you can do this: robots.txt files and meta directives.

Robots.txt

A robots.txt file tells web crawlers where they should and should not go on your website — although not all of them will listen. To access it, just add /robots.txt to the end of your URL (if nothing pops up, you don't have one). The basic syntax of a robots.txt instruction is very simple:

1. User-agent: [insert the name of the user-agent, i.e., the crawler/spider/bot you want to call out here — use an asterisk (*) to call out all of them]
2. Disallow: [insert the URL string you'd rather the crawler not visit — a standalone forward slash (/) can be used to tell spiders not to crawl your site at all]
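Putting that syntax together, a small robots.txt might look like the following sketch; the paths and the sitemap URL are hypothetical:

```
# Hypothetical example: rules for all crawlers
User-agent: *
Disallow: /thank-you/   # keep the form-submission "thank you" page out of search
Disallow: /staging/     # internal testing area
Crawl-delay: 10         # seconds between requests (not honored by Googlebot)

# Point crawlers at your sitemap for crawl budget optimization
Sitemap: https://www.example.com/sitemap.xml
```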
"Disallow" is the most common instruction you'll give in robots.txt, but you can also suggest a "Crawl-Delay" (the number of seconds you want the crawler to wait before loading the specified URL), "Allow" an exception within a disallowed URL string (Googlebot only) or submit an XML "Sitemap" containing your website's most crucial URLs — a key to crawl budget optimization.

Meta Directives

Robot meta directives (a.k.a. meta tags) tell web crawlers what they can and cannot do in regard to indexing — although, again, malicious bots may disregard them. Because they are written into the code of a web page, they read more as a demand than a suggestion. Using various parameters, website administrators can fine-tune whether or not (or for how long) a page is indexed, whether its links are followed, whether a search engine can pull snippets and more. For example, a tag of <meta name="robots" content="noindex, nofollow"> in a page's head asks crawlers to neither index that page nor follow its links.

Is Your Site Getting Indexed?

These are the most common reasons why your site might not be getting indexed:

• Your robots.txt file or meta tags are blocking the crawlers.
• It's brand new — for example, Googlebot can take anywhere from weeks to months to index a new site, depending on its size.
• It's not linked to from anywhere else on the web.
• The site's navigation is difficult to follow.
• Your site has been flagged for black hat SEO tactics.

How To Make Your Website More Crawlable

Here are some ways to make indexing work better for your site.

Get organized. Since links are the crawler's primary mode of transit, ensure your site has clear navigation pathways. If you want something to be indexed, it absolutely must be linked to from somewhere else on the site — at a bare minimum the main navigation menu, but ideally from other relevant, related pages throughout the site.

Reduce obstacles. Do your best not to hide important content behind logins, forms and surveys. Crawlers cannot read text inside images, videos and GIFs, so be sure to apply alt text to media. Navigation menus not written in HTML (e.g., JavaScript) are also invisible to spiders.

Submit a sitemap. Link your sitemap in the robots.txt file, and submit it through Google Search Console. From the Search Console control panel, site owners can get very specific about how they want Googlebot to crawl their pages. Depending on the size of your website, you can have your CMS generate your sitemap for you, write it manually or have it generated automatically by third-party software. (A bare-bones sitemap is sketched at the end of this article.)

How To Check Indexed Pages

To see the pages Google has already indexed, simply query "site:[your domain name]" — this will generate a list of your indexed pages in search results. It's a good way to see if anything important is missing, or anything unnecessary is included. Check up on it every so often after changes are made to ensure Google is seeing exactly what you want it to see.
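As promised above, here is what a bare-bones XML sitemap might look like; the URLs and dates are purely illustrative, and most CMSes and SEO plugins can generate the equivalent automatically:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sitemap: list your site's most crucial URLs -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2021-06-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/services/</loc>
    <lastmod>2021-05-15</lastmod>
  </url>
</urlset>
```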
Welcome to the AWS CodeStar sample static HTML website
This sample code helps get you started with a simple static HTML website deployed by AWS CodeDeploy and AWS CloudFormation to an Amazon EC2 instance.

What's Here

This sample includes:

• README.md - this file
• appspec.yml - this file is used by AWS CodeDeploy when deploying the website to EC2
• scripts/ - this directory contains scripts used by AWS CodeDeploy when installing and deploying your website on the Amazon EC2 instance
• webpage/ - this directory contains the static web assets used by your website
• index.html - this file contains the sample website
• template.yml - this file contains the description of AWS resources used by AWS CloudFormation to deploy your infrastructure
• template-configuration.json - this file contains the project ARN with placeholders used for tagging resources with the project ID

Getting Started

These directions assume you want to develop on your local computer, and not from the Amazon EC2 instance itself. To work on the sample code, you'll need to clone your project's repository to your local computer. If you haven't done so yet, do that first. You can find instructions in the AWS CodeStar user guide.

Open index.html from your cloned repository in a web browser to view your website. You can also view your website on the AWS CodeStar project dashboard under Application endpoints.

What Do I Do Next?

You can start making changes to the sample static HTML website. We suggest making a small change to /webpage/index.html first, so you can see how changes pushed to your project's repository are automatically picked up by your project pipeline and deployed to the Amazon EC2 instance. (You can watch the progress on your project dashboard.) Once you've seen how that works, start developing your own code, and have fun!

Learn more about AWS CodeStar by reading the user guide. Ask questions or make suggestions on our forum.

User Guide: https://docs.aws.amazon.com/codestar/latest/userguide/welcome.html
Forum: https://forums.aws.amazon.com/forum.jspa?forumID=248

How Do I Add Template Resources to My Project?

To add AWS resources to your project, you'll need to edit the template.yml file in your project's repository. You may also need to modify permissions for your project's worker roles. After you push the template change, AWS CodeStar and AWS CloudFormation provision the resources for you. See the AWS CodeStar user guide for instructions on modifying your template: https://docs.aws.amazon.com/codestar/latest/userguide/how-to-change-project.html#customize-project-template

What Should I Do Before Running My Project in Production?

AWS recommends you review the security best practices recommended by the framework author of your selected sample application before running it in production. You should also regularly review and apply any available patches or associated security advisories for dependencies used within your application.

Best Practices: https://docs.aws.amazon.com/codestar/latest/userguide/best-practices.html?icmpid=docs_acs_rm_sec
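For orientation, appspec.yml files for EC2 deployments like this one generally follow the shape sketched below. This is an illustrative sketch of the CodeDeploy AppSpec format, not the actual contents of the sample's file; the source/destination paths and hook script names are assumptions:

```yaml
# Illustrative appspec.yml sketch for an EC2/on-premises CodeDeploy deployment.
# Paths and hook script names are assumptions, not the sample's actual contents.
version: 0.0
os: linux
files:
  - source: /webpage            # copy the static assets from the bundle...
    destination: /var/www/html  # ...into the web server's document root
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies  # e.g., install and enable Apache
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server           # e.g., start or restart the server
      timeout: 300
      runas: root
```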
