
CS261

Software Engineering (wah)

Contents

Cheat Sheet of Essential Definitions
  1. Software Development Methodologies
  2. Requirements Analysis
  3. Team Organisation
  4. System Design
  5. Project Management
  6. Implementation
  7. Human Computer Interaction
  8. Dependability
  9. Testing

Software Development Methodologies

Introduction

  1. Plan-Driven
  2. Software Spec
  3. Agile
  • Different systems need different processes.
  • All processes involve some form of specification, design and implementation, testing, and evolution (maintenance)
  • Two main types of development: plan driven and agile
  • Plan driven: everything is planned and fixed in advance. Inflexible.
  • Agile: Incremental planning, more adaptable to change.

Plan-Driven

Waterfall Model

  • Invented in 1970, strictest of all plan driven models.
  • If a change is required, the waterfall model must restart. Incredibly inflexible and in practice not completely followed.
  • Requirements: most customer focused, involves identification of resources, distribution of work.
  • Design: design document generated, should be really detailed so implementation is not hard.
  • Implementation: only when design doc is finished. Everything written should be unit tested.
  • Verification/Integration: Most group focused - putting all parts of system together, making sure they work.
  • Maintenance: Hand over program and documentation. Offer maintenance, which is also done via waterfall.
  • Waterfall is good when requirements are well understood and will not change. There are few constraints on location and team size (development can be distributed and isolated), and each component can first be tested in isolation.
  • Waterfall is not good because the client has to wait a long time for results, changes are difficult to accommodate, and software technology can become deprecated over an entire years-long development timespan.

Incremental Development

  • A more flexible system than waterfall.
  • Each iteration is still planned like waterfall, spec updated between iterations (not rewritten).
  • (+) cost of accommodating change is much reduced, the user gets software quicker, and feedback is easier to get -- better (perceived) value for money
  • (+) user inclusion in acceptance testing; users can even install the system before the final version.
  • (-) very difficult to estimate overall cost of development of such a system.
  • (-) difficult to maintain consistency between versions - poor design choices early on hamper later feature additions. Spaghetti code.

Reuse-Oriented Software Engineering

  • Why reinvent the wheel? Instead, many devs rely on "off-the-shelf" / open source premade components, and just write the glue code to stick them together. Commercial off-the-shelf (COTS) systems.
  • Compromises on features have to be made with the client, but the tradeoff is that the program is banged out in record time.

Software Spec

  • To understand and define what services are required
  • To identify limits in feasibility - "requirements engineering", producing a requirements document.

Agile

  • Agile development is about rapid development: interleave spec, design, and implementation, and develop the system as a series of evolving prototypes.
  • Focus on code over design, develop as you go. Aim for speed, and flexibility.
  • Often has the short stand up meeting concept.
  • Major principles of Agile are
    • Customer involvement - cannot respond rapidly to changes without rapid feedback
    • Incremental delivery - have prototypes, and update spec for next iterations
    • People not process - have highly skilled coders that know what they're doing. Share knowledge and improve processes
    • Embrace change - open to additions, and design system to accommodate change (hard)
    • Maintain simplicity - since there is a lack of good documentation, software must be simple and easy to understand for new members - "self commenting code" (lofty ideals)
  • Most companies spend more money on maintenance than actual development. Since agile prioritises development over documentation, agile systems can be difficult to pick up and maintain later on.
  • Very flexible to requirements changes provided it's the original team doing it. Team losses hit harder in agile.
  • It is possible to mix plan-based and agile and pick and choose.

Extreme Programming

  • Incremental delivery with fast iterations. Automated tests to verify builds.
  • Code refactored constantly to maintain simplicity. Strong customer involvement, deliveries every few weeks.
  • Impractical if customer slow or hard to reach.
  • Incremental planning: requirements on "story cards", which are selected for inclusion based on priority.
  • Small releases: minimum functionality for release, with more stuff for future releases.
  • Simple design: only enough design to meet customer requirements, maintaining expandability - this is HARD
  • Test-driven development: write the tests for the feature before writing the feature to match the tests.
  • Refactoring: constantly refactor to improve code
  • Pair programming: work in pairs, one coding and the other checking and providing support. regularly swap.
  • Collective ownership: more than 2 people responsible for any one part of the codebase.
  • Continuous integration: integrate as soon as feature done
  • But also sustainable pace: avoid large amounts of overtime and overwork.
  • Onsite customer: have a customer rep on site for minimal response delay.
  • Extreme programming is very agile, but has the drawbacks of it too. Best suited for small, experienced teams.

Scrum

  • General method focused on iterative process, with three stages
    1. Planning stage - general goals
    2. Sprint cycle - each cycle is an implementation, 2-4 weeks but it varies
    3. Project closure
  • There are daily meetings for progress.
  • Select the features needed with the customer, but build in isolation. The scrum master (i.e. team leader) interfaces between team and customer
  • Work is reviewed and presented at end of sprint cycle.

Requirements Analysis

  • Requirements are descriptions of what a program should and shouldn't do. They enable devs to fulfil customer needs, and provide a basis for tests and analysis.
  • 2 parts: what is going to be built, and how is it going to be built.
  • Requirements bridge customer & developer, so they should be customer understandable, or at least have a customer-understandable version.
  • Have two requirements docs, a "C-facing" (customer facing) and a "D-facing" (dev facing), with differing amounts of technical detail. C-facing is usually written first. Crucially, there are no differences in requirements between the two.
  • Requirements should be specific and measurable, not vague. Be aware of changing requirements.
  • C-facing reqs:
    • System from user view
    • How it works, in natural language
    • Diagrams are always nice
    • List of constraints in operation
  • D-facing reqs:
    • Detailed descriptions of functionality
    • Language, services, protocols, libraries, etc
    • Defines exactly what must be implemented
  • The whole req doc must be: prioritised, consistent, modifiable, traceable (i.e. know where req came from, justification)
  • Each requirement must be: correct, feasible, necessary, unambiguous, verifiable
  • The MoSCoW order of priority is often used: Must, Should, Could, Won't, but there is an argument that if a requirement is "Won't", don't put it there in the first place?
  • Requirements elicitation requires interacting with stakeholders to gather information about the project. Think through the conflicts of interest. Then, get clarifications, go through 'em with a fine-toothed comb, and finally write down the document after finalising.

Team Organisation

  • The project manager makes sure everything is running smoothly and on time.
    • It is arguably the most stressful role, and often the least technical and most people-oriented one.
    • They must track progress, help unblock stuck situations, and plan the project's development.
    • They must manage the team, chase people up about work, etc.
    • And consider risks (to development or to team) and mitigations -- risk assessments. Considering when to sacrifice features if necessary.
  • The business analyst looks at organisational context of the project.
    • They identify stakeholders, activities, processes, etc
    • And understand the stakeholders' requirements. They do the requirements elicitation and documentation. Make sure everything is traceable and justified.
  • These guys must also: Review the test plan -- a good test plan can identify mistakes even before development, such as incompatibilities between modules. Bugs and defects should be triaged and prioritised.
  • Supervise project installation, deal with the "day 1 live trauma", and hand over software, and perhaps manage maintenance and support.
  • Reflect over what did well, what did poorly when closing down project. Archive and seal documentation.
  • Reward and recognise people for their achievements.
  • Throughout, accountability for code is important. Use version control, git blame exists for a reason. Git is good.

System Design

Contents

  1. System Modelling
  2. UML in more detail

System Modelling

  • System design is supplemented by mathematical/logical system modelling diagrams, which clarify functionality, provide a basis for development, and inform design approaches and component level decisions.
  • UML - Unified Modelling Language - is a set of formal representations which help with the 4 "perspectives":
    • External -- context of system, interaction systems
    • Interaction -- how people interact with the software, what is accessed by who, what is internal
    • Structural -- layout, core features, class organisation
    • Behavioural -- dynamic behaviour, algorithms and processes reacting to external or internal interactions.
  • UML has two subsections (views): static / structural; and behavioural.
  • Within these views are different types of diagrams: class diagrams, use case diagrams, activity diagrams, state machine diagrams, and sequence diagrams.

UML in more detail

Class diagrams (static): most common type of diagram, shows the entities within a system and their relations. Obviously most suited to OOP.

  • Entities can be identified through
    • A grammatical approach, where they are extracted from a description of requirements
    • Tangible things in application domain
    • Behavioural approach, thinking about how entities will interact
    • Scenario-based, objects and methods from scenarios

Activity Diagrams (behaviour): kinda like a flowchart really. Note that a decision node doesn't have anything written inside it; the conditions label its outgoing branches.

Use case diagrams: how different actors interact with the system's use cases.

Sequence Diagrams: to show temporal interaction -- how a system's interactions go over time.

  • Participants are objects or entities.
  • Each diagram always starts with a call out arrow, showing external prompting. Messages passed are shown in arrows. Time is not to scale.
  • Labels take the form of name:Object for a named object, :Object for an anonymous object, and name for a named unknown class.
  • If an object calls itself, we need to nest a bar inside a bar. Loops and ifs are done with labelled boxes.
  • Don't model a whole system on them - they're ugly and ungainly, best used for subsystems.

Project Management

Contents

  1. Why Projects Fail
  2. Risk Management
  3. Project Management

Why Projects Fail

Very few projects fully succeed. Why is this so? Well, there are a multitude of reasons, amongst them poor planning, requirements changing too much, high turnover, unrealistic deadlines, poor testing, and so on and so forth.

Project management is essential to making sure constraints are kept, to

  1. Deliver software on time
  2. Keep costs within budget
  3. Deliver software that meets expectations
  4. Maintain morale and productivity of team

Team success depends on three generic factors:

  1. People: a mix of people with different motivations and skillsets
  2. Organisation: individuals must be given opportunity to contribute
  3. Communication: technical and managerial communication is essential
And four people factors:
  1. Consistency: not making people feel undervalued
  2. Respect: everyone has equal opportunity to contribute
  3. Inclusion: all views should be considered (regardless of hierarchy)
  4. Honesty: faking it will backfire unless you actually make it

People are motivated through satisfaction of their needs (something something Maslow's hierarchy of needs).

Hierarchy is still important. Should the PM be the tech lead? Or should it be someone else? Who will interact with stakeholders? How do we integrate people who are not in the same location? How can knowledge be shared?

Group organisation can be informal or hierarchical:

  • Informal: No strict hierarchy, decisions made by consensus. Can be successful if the group is highly competent.
  • Hierarchical: Defined leaders and management levels. Can work well in breaking down and delegating subproblems. Best when responsibilities are clear.

A cohesive team can establish their own quality standards, and actually follow them. Individuals will learn from and support each other, and people tend to work better.

Risk Management

Identification

Risks can be grouped into what areas they affect. Project risks affect the schedule or resources of the entire project. Product risks affect the final quality of the product. Business risks affect the organisation. Some risks can fall into multiple categories.

Project Risks include staff turnover, management change, hardware unavailability, requirements change, etc.

Product Risks include tool/library underperformance, the aforementioned requirements change, specification delays, size/complexity underestimates, etc.

Business Risks include technology changes and deprecation, product competition, etc.

There can be even finer category groups: such as Technology Risks, People Risks, Organisational Risks, Tool Risks, Requirements Risks and Estimation Risks.

Analysis

Consider each risk and its severity; risks can then be grouped and prioritised.

For example, you could have a rating Insignificant/Tolerable/Serious/Catastrophic.

Contingency

Once you have a prioritised risk list, a contingency plan for each risk must be made. First is avoidance: aim to reduce the chance of the risk even becoming reality, then comes minimisation: reducing damage if it goes wrong, and finally contingency plans: what to do if risk does occur.

This all goes into a risk assessment / risk register, which can be a document, or on a management platform, but somewhere accessible to management.

Project Management

The planning documents of a project should communicate all ideas, contingencies, organisation, etc to both the developers and the stakeholder.

Planning has three stages: (1) Proposal / pitch / bidding phase, (2) startup phase, and (3) periodic planning.

Scheduling is done through Gantt charts, and critical path analysis algorithms for deciding which tasks go first.

Estimating costs and schedules, however, is easier said than done,

  • It often comes down to experience as to how to schedule a project correctly,
  • Or using some sort of algorithm to guesstimate the schedule.

Success is measured against how well the project meets the spec and existing expectations.

Implementation

Contents

  1. Design Patterns
  2. Creational Patterns
  3. Structural Patterns
  4. Behavioural Patterns
  5. SOLID

Design Patterns

  • Design Patterns are solutions to common programming problems. They're modular blocks designed to make code more flexible, a design structure that achieves a purpose. You see them a lot in OOP, where most enterprise code is.
  • Design patterns all have 4 aspects: a meaningful name, a description of the problem to solve, the solution, and a statement of the drawbacks.
  • They are generic blueprints, and it takes experience to know when to use them in the correct situations.

Creational Patterns

Factories

  • Creational patterns help with reducing the tedium of creating objects.
  • Imagine a bicycle race:
    class Race {
    	public Race createRace() {
    		Frame frame1 = new Frame();
    		Wheel frontWheel1 = new Wheel();
    		Wheel rearWheel1 = new Wheel();
    		Bike bike1 = new Bike(frame1, frontWheel1, rearWheel1);
    		// repeat for every single other bike in the race...
    		return this;
    	}
    }
  • This is bloody tedious. Plus if we were to extend from race:
    class TourDeMartinique extends Race {
    	public Race createRace() {
    		// we need regulation bikes
    		// so we have to go through the whole rigamarole again of bike creation...
    		Frame frame1 = new RegulationFrameEx007();
    		Wheel frontWheel1 = new RegulationWheel1800F();
    		Wheel rearWheel1 = new RegulationWheel1800F();
    		Bike bike1 = new Bike(frame1, frontWheel1, rearWheel1);
    		// ...
    		return this;
    	}
    }
    it makes things even worse. Worse still, if we need to change something, then we would have to update the whole thing, which is error prone.
  • Instead, let's have methods which create objects for us, so we can have object creation all in one place -- this is the factory method
    class TourDeMartinique extends Race {
    	Frame createFrame() { return new RegulationFrameEx007(); }
    	Wheel createWheel() { return new RegulationWheel1800F(); }
    	Bike createBike(Frame frame, Wheel front, Wheel back) {
    		return new Bike(frame, front, back);
    	}
    }
    especially helpful if these common method signatures are implemented in the Race class.
  • We could even pull this into its own class - a factory class (a usage sketch follows at the end of this section).
    class BicycleFactory {
    	Frame createFrame() { ... }
    	Wheel createWheel() { ... }
    	Bike createBike(Frame frame, Wheel front, Wheel back) { ... }
    	Bike createDefaultBike() { ... }
    	// etc.
    }
  • It's a way to get around the limitations of statically typed OOP constructors.
  • Advantages:
    • Cutting down on repeated code
    • Adding new variations, scenarios is easier
    • Making changes is easier
    • Easier to test
  • Disadvantages:
    • Lots of boilerplate classes
    • Factory is linked to its produced class, thus when we update that class, we must update all the factories -- still some sort of cascading update.
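
As a quick usage sketch (my own illustration reusing the BicycleFactory above; RegulationBicycleFactory is a made-up name): the client code never names a concrete frame or wheel class, so swapping the factory swaps every bike in the race.

    class RegulationBicycleFactory extends BicycleFactory {
    	Frame createFrame() { return new RegulationFrameEx007(); }
    	Wheel createWheel() { return new RegulationWheel1800F(); }
    }

    class Race {
    	private final BicycleFactory factory;

    	Race(BicycleFactory factory) { this.factory = factory; }

    	Bike createEntrant() {
    		// no concrete class is named here: the factory decides
    		return factory.createBike(factory.createFrame(),
    				factory.createWheel(), factory.createWheel());
    	}
    }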

Builders

  • When an object has many attributes, especially when you want to just forget about some of them, writing constructors is hard. We'd need to consider all the variations of what we want, what constructors we need, and if there are like two dozen different attributes, wah.
  • Builders are the pattern to help with this issue. Builders abstract a constructor into a series of substeps, each of which "builds" an individual component, and the object is created with a final build call.
    abstract class HouseBuilder {
    	abstract void buildWindows();
    	abstract void buildDoors();
    	abstract void buildWalls();
    	abstract void buildRooms();
    	abstract House getHouse();  // the final "build" call that returns the object
    }
  • Builders are not factories: they are more flexible, and designed for large classes with many optional parameters. Their goal is to avoid long tedious constructors (see the fluent sketch at the end of this section).
  • (Note: Lombok for java has the annotation @Builder)
  • Advantages:
    • More control over construction
    • Can reuse construction code for different instances
    • Single responsibility principle: one bit of code responsible for one thing. One place deals with construction.
  • Disadvantages:
    • Like factories, needs large number of new classes and boilerplate
    • Code becomes longer, construction still complex, just modular now!
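
For the fluent style you see in the wild (roughly what Lombok's @Builder generates for you), a minimal hand-rolled sketch -- the House class and its fields here are made up for illustration:

    class House {
    	private final int windows;
    	private final int doors;
    	private final boolean hasGarage;  // optional, easy to leave out

    	private House(Builder b) {
    		this.windows = b.windows;
    		this.doors = b.doors;
    		this.hasGarage = b.hasGarage;
    	}

    	static class Builder {
    		private int windows, doors;
    		private boolean hasGarage;

    		Builder windows(int n) { this.windows = n; return this; }
    		Builder doors(int n) { this.doors = n; return this; }
    		Builder garage() { this.hasGarage = true; return this; }

    		// the final build call actually constructs the object
    		House build() { return new House(this); }
    	}
    }

    // usage: set only what you care about, in any order
    // House h = new House.Builder().windows(8).doors(2).build();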

Prototypes

  • Another object construction method, where we create a prototype object and then clone it. e.g.
    class Bike implements Cloneable {
    	public Object clone() { ... }  // must be public: Object's clone() is protected
        // ...
    }
    class Race {
    	Bike prototype;
    
    	public Race(Bike prototype) {
    		this.prototype = prototype;
    	}
    
    	public Race createRace() {
    		Bike b1 = (Bike) prototype.clone();
    		//...
    	}
    }
  • Advantages:
    • Don't need to make another subclass just to create an object
    • Remove heavy initialisation for cloning
    • Produce complex objects easily
    • Keep class hierarchy simple
  • Disadvantages:
    • Circular references are difficult
    • Might still have to do heavy update code on cloned objects

Structural Patterns

Proxy Pattern

  • The proxy pattern allows us to create placeholders for other objects.
  • Reference an entity without having to load the entire thing (such as in previews)
  • Used for anything which needs a "load on demand"
  • public interface Graphic {
    	void draw();
    }
    
    public class ImageProxy implements Graphic {
    	private String fileName;
    	private Image content;
    
    	public ImageProxy(String fileName) {
    		this.fileName = fileName;
    		content = null;
    	}
    
    	public void draw() {
    		// only load the content when it is needed 
    		if (content == null) content = new Image(fileName);
    		// the actual image class will have a draw function
    		content.draw();
    	}
    }
  • There are many different types of proxy, like:
  • Virtual Proxy (lazy initialisation): for something that is resource heavy, put off loading until last minute.
  • Protection Proxy: provides access control to object.
  • Remote Proxy: offers functionality which is off-site, and handles all networking.
  • Logging Proxy: to keep track of accesses and requests on the side.
  • Caching Proxy: save contents/results of an object for a short time, useful if the object is computationally or network intensive.
  • Smart Referencing: track whether the heavy object is still in use, and free it when no clients reference it -- essentially garbage collection.
  • Advantages:
    • Can hide away parts of service object
    • Lets us manage the object's life cycle
    • Proxy provides availability even if object not available
    • New proxies can be made without changing service
  • Disadvantages:
    • Added complexity
    • Adds another step in getting response -- overhead concerns

Decorator Pattern

  • Suppose we want to have an object that does multiple things. Say we have a message bot that needs to send to several different platforms at once. Well, given a base Message class, for each platform we would have to subclass: FacebookMessage, TwitterMessage, DiscordMessage, etc, etc.
  • But what if we want to send to multiple at once? Then we'd need to do all the combinations and ...
  • Wah.
  • Decorators are a way around this, by wrapping objects so they can have dynamic behaviour at runtime (a sketch follows at the end of this section). Not to be confused with Java annotations like Lombok's @Data -- those look like Python-style decorators but are a different mechanism.
  • A wrapper is an outside "packaging class" that has all the functionality of the inner class, but will do some extra logic before calling original methods.
  • At runtime, we can now check what options the client has picked, and wrap our message object in all the necessary decorators.
  • Decorators and Proxies are very similar in method, but are there for different things.
  • Advantages:
    • Extend behaviour without adding several subclasses
    • Responsibilities become dynamic at runtime
    • Combinable, unlike subclasses
    • Promotes single responsibility
  • Disadvantages:
    • Removing wrappers later is difficult
    • Hard to implement in a way that isn't order dependent
    • Initial code layout can look messy - having used Spring... can bloody confirm.
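
To make the message-bot example above concrete, a minimal sketch (all class names here are illustrative): each decorator wraps any Message, adds one platform, and the combination is assembled at runtime.

    interface Message {
    	void send(String text);
    }

    class BasicMessage implements Message {
    	public void send(String text) { /* deliver to the default channel */ }
    }

    // the wrapper: holds an inner Message and delegates to it
    abstract class MessageDecorator implements Message {
    	protected final Message wrapped;
    	MessageDecorator(Message wrapped) { this.wrapped = wrapped; }
    }

    class TwitterMessage extends MessageDecorator {
    	TwitterMessage(Message wrapped) { super(wrapped); }
    	public void send(String text) {
    		wrapped.send(text);  // original behaviour first...
    		// ...then the extra logic: also post to Twitter
    	}
    }

    // wrap in whatever the client picked, in any combination:
    // Message m = new DiscordMessage(new TwitterMessage(new BasicMessage()));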

Adaptor

  • As name implies, allows output from one object to be used by another.
  • If we update a class that is used everywhere, we would need to make sweeping changes across the entire code base, which is really error prone and hard to do. Alternatively, stick an adaptor between the object and everything else (sketch below).
  • Advantages:
    • Promotes single responsibility
    • New adaptors can be introduced without heavy refactoring
  • Disadvantages:
    • Increased code complexity
    • Depending on size of codebase, converting original object might just be easier
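
A minimal sketch of the idea (names invented for illustration): callers keep talking to the interface they always used, and the adaptor does the translation.

    // the interface the rest of the codebase was written against
    interface TemperatureSource {
    	double celsius();
    }

    // the new component, with an incompatible interface
    class FahrenheitSensor {
    	double readFahrenheit() { return 72.0; }
    }

    // the adaptor sits between them, so no caller has to change
    class SensorAdaptor implements TemperatureSource {
    	private final FahrenheitSensor sensor;
    	SensorAdaptor(FahrenheitSensor sensor) { this.sensor = sensor; }
    	public double celsius() {
    		return (sensor.readFahrenheit() - 32) * 5.0 / 9.0;
    	}
    }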

Flyweight

  • The above is all about wrapping objects to add more functionality.
  • What if we don't want to do that, but rather just rearrange the objects so they're nicer?
  • The flyweight pattern is a design pattern that allows us to fit more objects in memory. It works best when objects share common properties which are also huge in size.
  • Essentially: find all the data that is the same over a bunch of objects, and extract that single copy out so every object shares it.
  • Suppose we have this NPC:

    class Orc {
        String name;
    	int health;
    	Weapon weapon;
    	NPCAI style;
    	Map texture;
    
    	// methods...
    }

    But the texture map is huge, because this is a really detailed game. If every orc has its own copy, we can't store many of them. But this texture is common to all orcs (as well as a few other things), so might as well split this into

    class OrcData {
    	NPCAI style;
    	Map texture;
    	Weapon weapon;
    }
    
    class Orc {
    	String name;
    	int health;
    	OrcData data;  // every orc points at the same shared OrcData instance
    }

    And so now we only store one copy of OrcData, shared by every orc.

  • Advantages: Saves memory, potentially drastically
  • Disadvantages:
    • Some data may need to be recalculated every call -- saving memory for increased compute time
    • Complex code

Behavioural Patterns

  • Concerned with how objects communicate. Majority of object's behaviour is communication.
  • An object can either change its own internal state, or interact by passing data to another object. The latter is what we care about.

Iterator Patterns

  • I think we've all used these? All default Java collections implement Iterable, so we can do for (Object i : listOfObjects) { ... } (a sketch of rolling your own follows this list).
  • Advantages:
    • Single responsibility principle
    • New iterators can be introduced without heavy redesigns
    • Iterate multiple ways in parallel
    • Can even pause iteration and carry on later
  • Disadvantages:
    • Not always necessary
    • Less effective for highly specialised objects
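
A minimal sketch of providing your own iterator (the Ring class is invented for illustration): implement Iterable and the for-each loop just works.

    import java.util.Iterator;

    class Ring implements Iterable<Integer> {
    	private final int[] items;
    	Ring(int... items) { this.items = items; }

    	public Iterator<Integer> iterator() {
    		// each call hands out a fresh, independent iterator,
    		// which is what lets us iterate multiple ways in parallel
    		return new Iterator<Integer>() {
    			private int i = 0;
    			public boolean hasNext() { return i < items.length; }
    			public Integer next() { return items[i++]; }
    		};
    	}
    }

    // usage: for (int x : new Ring(1, 2, 3)) { ... }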

Observer patterns

  • Allows an object's dependents to be automatically notified of a change.
  • This can work as a push model (sender sends) or a pull model (receiver periodically asks).
  • Often referred to as Producer/Consumer or Publisher/Subscriber.
  • On the producer side, subscribers should be able to be added to and removed from notification lists. There can be a different notification method for each type of notification the publisher sends. Then, when the event occurs, iterate through all subscribers and send (see the sketch after this list).
  • This way removes the need for subscribers to constantly check - busy waiting is bad and is a waste of resources.
  • Subscriber lists must be opt-in.
  • By maintaining lists we reduce the number of subscribers that need to receive data and do not bother the rest of the system unnecessarily.
  • Advantages:
    • New subs can be added without needing to redesign publisher
    • Relationships between objects can change at runtime
  • Disadvantages:
    • Subscribers notified in possibly random order
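
A minimal push-model sketch (interface and class names invented): the publisher keeps an opt-in list and iterates it when the event fires, so nobody busy-waits.

    import java.util.ArrayList;
    import java.util.List;

    interface Subscriber {
    	void update(String event);  // push model: the data arrives with the call
    }

    class Publisher {
    	private final List<Subscriber> subs = new ArrayList<>();

    	void subscribe(Subscriber s) { subs.add(s); }      // opt-in
    	void unsubscribe(Subscriber s) { subs.remove(s); }

    	void publish(String event) {
    		// only those who asked get notified; the rest of the system
    		// is not bothered unnecessarily
    		for (Subscriber s : subs) s.update(event);
    	}
    }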

Memento Pattern

  • Save and Restore objects without revealing details of implementation.
  • Especially useful in the event we need undos.
  • The idea is the object can implement a method that makes a snapshot of itself, which is a limited interface that can then be stored in a caretaker class. This way, we don't violate encapsulation and expose all the private innards of an object.
  • The example given is an art program

    public class Canvas {
    	private int[][] colours;
    
    	public Canvas(int x, int y) {
    		colours = new int[x][y];
    	}
    
    	public void setPixel(int x, int y, int col) {
    		colours[x][y] = col;
    	}
    
    	public Snapshot makeSnapshot() {
    		return new Snapshot(this, colours);
    	}
    }
    public class Snapshot {
    	private Canvas canvas;
    	private int[][] colours;
    
    	public Snapshot(Canvas canvas, int[][] colours) {
    		this.canvas = canvas;  // maintain reference to existing canvas object
    		// deep-copy the pixel data: storing the live array would mean later
    		// setPixel calls silently rewrite this "snapshot" too
    		this.colours = new int[colours.length][];
    		for (int i = 0; i < colours.length; i++)
    			this.colours[i] = colours[i].clone();
    	}
    
    	public void restore() {
    		for (int i = 0; i < colours.length; i++) {
    			for (int j = 0; j < colours[0].length; j++) {
    				canvas.setPixel(i, j, colours[i][j]);
    			}
    		}
    	}
    }
    
    public class Caretaker {
    	List<Snapshot> history;
    	
    	public Caretaker() {
    		history = new ArrayList<Snapshot>();
    	}
    
    	public void addEntry(Snapshot s) {
    		history.add(s);
    	}
    
    	public void restore() {
    		Snapshot s = history.get(history.size()-1);
    		s.restore();
    	}
    }
  • Advantages:
    • Making backups without violating encapsulation
    • By extracting out maintenance and restoration, we keep original object as simple as possible
  • Disadvantages:
    • Heavy memory cost, managing mementos
    • Need caretakers to track life cycles, more classes
    • Dynamic languages can't actually guarantee state is preserved.

Strategy Pattern

  • To select the method to complete a task dynamically at runtime.
  • Say we have an object completing a problem, and we want it to choose on its own how to approach said problem.
  • We have several new classes called strategies, each of which is a different solving approach. The original object selects from and uses these classes, and becomes a context object (see the sketch after this list).
  • Advantages:
    • Can swap implementation at runtime
    • separating implementation details of an algorithm from the code that is just running it
    • Simplifies class hierarchy -- composition replacing inheritance
  • Disadvantages:
    • Unnecessary when there's only few choices
    • Requires clients to understand differences between strategies
    • Anonymous functions are making this obsolete
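
A minimal sketch (names invented): the context object holds a strategy through an interface and can swap it at runtime; and as the last point above says, a lambda can stand in for a whole strategy class.

    interface SortStrategy {
    	void sort(int[] data);
    }

    class QuickSort implements SortStrategy {
    	public void sort(int[] data) { /* quicksort here */ }
    }

    // the context: knows only the interface, not which strategy it holds
    class Sorter {
    	private SortStrategy strategy;
    	void setStrategy(SortStrategy s) { this.strategy = s; }  // swappable at runtime
    	void sort(int[] data) { strategy.sort(data); }
    }

    // with lambdas, no named class is needed at all:
    // sorter.setStrategy(data -> java.util.Arrays.sort(data));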

SOLID

  • These patterns help achieve SOLID principles of good code design, those being
  • Single Responsibility: each class does one thing
  • Open/Closed: open to extension, closed to modification - once we finish a class, don't modify it again - extend it
  • Liskov Substitution: an object that uses a parent class can use its child classes without knowing -- child class can pretend to be parent class and serve all functionality of parent class
  • Interface segregation: many specific interfaces better than one generic interface -- no code should depend on methods it doesn't use
  • Dependency Inversion: ensure high level classes do not depend directly on low level classes; both depend on abstractions (sketch below)
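
A small sketch of dependency inversion (classes invented for illustration): the high-level class depends on an abstraction defined for its needs, and the low-level detail implements that abstraction, rather than the other way round.

    // low-level detail
    class MySqlDatabase {
    	void insert(String row) { /* ... */ }
    }

    // abstraction defined for the high-level code's needs
    interface Store {
    	void save(String row);
    }

    // high level: no mention of MySQL anywhere
    class ReportService {
    	private final Store store;
    	ReportService(Store store) { this.store = store; }
    	void archive(String report) { store.save(report); }
    }

    // low level depends on (implements) the abstraction
    class MySqlStore implements Store {
    	private final MySqlDatabase db = new MySqlDatabase();
    	public void save(String row) { db.insert(row); }
    }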

Human Computer Interaction

Introduction

  1. Attention
  2. Memory
  3. Cognition
  4. Affordances

  • Essentially talking about the interface between the user and the system. The success of a product is determined by who uses it, and so making a program usable is preferable.
  • We have Nielsen's usability principles.
  • As technology improves, interactions become more natural and intuitive - imagine DOS vs a Windows 7 GUI. Humans have natural ways of interaction, and trending towards that natural way is a good thing.

Attention

  • Attention is the process of selectively concentrating on certain things whilst ignoring others.
  • Attention can be forced, divided, or can be gained involuntarily/unconsciously.
  • There are 4 types:
    • Selective Attention - when we tune out things to focus on specific things
    • Sustained Attention - the basis for attention span, how long we can focus on one task
    • Divided Attention - Can focus on multiple things at once, but this gets harder as tasks get more complex
    • Executive Attention - more organised version of sustained attention - keep track of steps and have clear goals
  • We want to design a system to draw attention to what is important.
  • We also want suitable metaphors so that even on new systems users intuitively know what certain things do - dragging a file to a bin icon will put it in the recycling bin.
  • Need to consider the context of the application, and how that relates to attention span.
  • If we want people to constantly use our app and make ad revenue, then we need to sustain their attention (TikTok, granted most social media). If people are going to be using the app in a risky situation, then plan for divided attention (like a GPS system). If the app is used for tasks which take time, show progress.
  • People's attention spans are shrinking, but we still need to keep people focused. Thus,
    • Keep user's goals in mind -- don't clog the interface
    • Switch up visuals / presentation often
    • Make it intuitive, then less focus needed
    • Use attention grabbing techniques sparingly (ha)

Memory

  • Not computer memory -- user memory. That thing that's like a sieve.
  • Memory has three main components: sensory stores, which hold information before it enters working memory; working memory, i.e. short term memory; and long term memory.
  • Short term memory is a temporary scratchpad that decays very quickly. It can only really hold around 4 or 5 things at once, and works best when information is chunked.
  • Long term memory has episodic memory: knowledge of events and experiences, chronologically; and semantic memory: knowledge of facts, concepts and skills, often derived from the former. Long term memory also decays, but slower.
  • Design for short term memory - if memory load is light, interactions can be quicker and smoother.
  • No need for a user to make mistakes and stumble their way through a user interface until they finally internalise it.
  • Can use colour cues to differentiate items, and the lack of space in short term memory is another reason to keep UI sparse.
  • Eventually interaction will enter long term memory. Keeping your interface similar between products helps in existing users' ease of navigation.
  • Emotions stimulate memory, generating emotions helps memory. However, there is a downside, since generating negative emotions (like the windows 10 settings application) is a good way to make a user hate your UI even more.
  • Users also remember better what they have done themselves -- passiveness -> forgetfulness.
  • Cannot fully rely on short term; some things require long term, and committing to long term memory requires explicit effort, which we want to minimise. Using familiar images, consistent design, and conventions helps make things easier.
  • This relates to icon design, where we want to use existing known icons for things (like a bell or envelope for notifications). Icons can be designed in 4 ways - resemblance; exemplar (using an example, like knife and fork for restaurant); symbolic (abstraction, a glass with a crack is fragile); or arbitrary (like the radioactive symbol).

Cognition

Norman's Action Cycle

  • Cognition includes understanding, remembering, reasoning, attending, ideas, and skills.
  • Psychologist Donald Norman created a human action cycle to describe actions people take when interacting with systems. We can use it to evaluate how good our UI is.
    • Form a goal - user first decides what they want to do
    • Intention to act - user makes intention explicit
    • Planning to act - user chooses an action from the list of possible actions
    • Execution - user does the interaction
    • Feedback - user receives feedback from the world / system
    • Interpret feedback - user interprets the feedback based on what they expected to happen
    • Evaluate - user determines if they have been successful and are now closer to their goal
  • Using the cycle to evaluate focuses on two aspects: the Gulf of Evaluation and the Gulf of Execution.
  • The former is the gap which must be crossed to interpret a UI. If outcomes are not as expected, then the UI may be difficult to interpret.
  • The latter is the gap between wanting to complete an action and doing that action: effectively how long / how many steps it takes for the user to complete their actions, which should be small for common actions. Power users can make use of macros or shortcuts or something.
  • From the action cycle we can also get 4 principles of design:
    • Provide visibility -- ensure user can easily understand current state of operations
    • Provide a good conceptual model -- ensure outcomes and results are consistently presented
    • Provide good mappings -- ensure easy to determine what the outcome of any action will be
    • Provide feedback -- ensure user gets constant and consistent feedback on results

Gestalt Principles

  • Some more psychologists blah blah blah. Patterns and rules on how humans will perceive sensory information.
  • 1: Figure Ground Principle: People segment their vision into "figure" and "ground" (i.e. fore and background). The figure gets the focus and is perceived as in front.
  • 2: Similarity Principle: Form informs function. If two things look similar, they'll probably behave similar. Can make use of colours and shapes in UI.
  • 3: Proximity Principle: If objects are grouped together, they're probably related. Proximity overrides similarity.
  • 4: Common Region Principle: Following from proximity, we group objects together in the same closed region -- what borders do. A UI will have several boxes and menus.
  • 5: Continuity Principle: If a series of objects are in a line or a curve, we think they are related. Curves and lines are better because we pattern match them quickly.
  • 6: Closure Principle: If we see a complex arrangement of shapes, likely to form them into a single pattern and fill in the blanks. ඞ
  • 7: Focal Point Principle: When we look at an image, we are drawn to the thing that stands out the most. Think about the big, bold, green-on-white ACCEPT COOKIES and then the small, inconspicuous reject non-essential cookies on those popups.

Affordances

  • Perception is where information is detected rather than constructed. Affordances are what an object allows us to do. A door affords opening it.
  • We must make affordances as clear to the user as possible, to make things more intuitive. We show affordances via signifiers.
  • Affordances can be perceptible, or invisible (not shown, but known), but signifiers must be perceptible -- e.g. a plate on a door indicating push. Signifiers can also be used wrong, like a handle on a push door, which is a false signifier.
  • Many affordances and signifiers exist by convention, such as a floppy disc being used for the save icon.

We can use metrics to evaluate a UI - such as the ratio of success to failure, time to complete a task, number of errors a user makes, or the number of times a user expresses frustration (windows 10 settings app) or satisfaction.

Dependability

Introduction

  1. Dependency and Failure
  2. Dependable Processes

Dependability is the trustworthiness of a computer system such that reliance can be justifiably placed on the service it delivers.

That was -- that was more of a word salad than I expected. "Dependability is the trustworthiness of a computer system and the services it provides" might be better?

Dependency and Failure

  • As computer systems become more crucial, they need to be more reliable/dependable.
  • System failures affect more people than a few missing features. Unreliable systems are avoided by users. System failure can also have massive cost to whatever the system was supposed to be doing (data, money, lives, money, security, and most importantly money)
  • Reliability is a measure of how likely a system will work over a set period of time.
  • Perceived Reliability is how it seems to the user. It may differ from actual reliability, since failures in rarely used functions are less likely to be seen by users.
  • We can measure reliability via
    • Probability of failure on demand: failure on request of service
    • Rate of failure occurrence: failures per time period
    • Mean time to failure: how long it can keep running, on average
    • Availability: if I request something, what are the chances it's up?
  • Dependability consists of: Availability, Reliability, Safety, Confidentiality, Integrity, and Maintainability.
  • Attributes can be reduced or broken down into others. Not all attributes of dependability are relevant to all systems -- security is less of an issue for a pacemaker than reliability.
  • Other system properties related to dependability are
    • Repairability: how easy is system to repair if it breaks. Are broken components accessible/modifiable easily?
    • Maintainability: Is it economical to add new requirements and updates -- without breaking a lot of stuff.
    • Error tolerance: Designing system to accommodate user errors
  • The fault is the cause of the error, which is the effect, which may cause failure, when that error propagates past what the system/module can handle.
  • One module's failure can be the fault of another module, and so on and so forth until it exceeds the system boundary, and the whole system fails.
  • System failure can be caused by many things, notably hardware failure, software failure (due to design/impl errors), or operational failure (where the human operator makes mistakes)
  • To provide dependability, we can
    • Develop carefully to prevent faults -- fault avoidance
    • Verification and testing to discover and remove faults before push -- fault detection and correction
    • Desigining systems so that faults are handled and the system does not fail -- fault tolerance
    Cost of fixing faults increases exponentially over time. If it's more cost-effective to release software with faults and fix them later, then that is what will be done.
  • A system needs to recover, or continue operating in the case of a failure of a component:
    • Graceful degradation enables the system to still operate (perhaps in reduced capacity) when part of it fails
    • Redundancy is where spare capacity is included in a system in case part of the system fails
    • Diversity: making redundant systems have different implementations, so they're less likely to fail in the same way.
  • Software itself is intangible, and fairly cheap to fix. The only problem being when its effects ripple out into the real world.
  • Ideally we want to contain failure and not let it propagate -- this is difficult for the developers though.
  • Failures often come from adding new system requirements.

Dependable Processes

  • Dependable processes are designed to produce dependable software.
  • These processes are
    • Documentable: defined process model that sets out what documentation is needed
    • Standardised: processes applicable for many different systems, and should have standards applicable to all of them.
    • Auditable: understandable by people other than the users, so verification can be done
    • Diverse: include redundant and diverse validation/verification
    • Robust: should be able to recover from failures of individual process activities.
  • Dependable processes may not be enough to guarantee dependability. If so, system environments need to be designed dependably.
  • Protection systems are special systems parallel to other (usually control) systems. They exist to monitor the control system, and to move it to a safe state on fault.
  • Self monitoring architectures are systems designed to monitor their own operation: computations are carried out on separate channels and the outcomes compared. Each channel should have different hardware and software.
  • N-version programming has multiple software units (perhaps made by different teams), where each version is executed separately, and then the outputs are compared. This has a high cost, and is used where other approaches are impractical.
  • All rely on diversity. If we give the same spec to different isolated teams, their implementations may be different, with different bugs and points of failure, reducing chance of overall failure from all systems.
  • Achieving complete diversity is impossible though, as programmers still work similarly at the basic level.

Testing

Introduction

  1. Static Testing
  2. Dynamic Testing
  3. Unit Testing
  4. Component and System Testing, TDD
  5. User Testing

  • Insufficient testing has led to many examples of massive damages, whether financial or in human life.
  • Testing shows program does what it is meant to, and reveals defects to be fixed before production.
  • Part of verification and validation.
  • Testing can only show presence of bugs, and cannot prove absence of bugs.
  • Some terminology:
    • Verification: does it meet spec?
    • Validation: does it meet needs of customer (which may be not as explicitly mentioned in the spec)?
    • Error: human action that produces incorrect result
    • Failure: software deviates from expected function
    • Defects/Bugs: manifestation of error in software, may cause failure
    • Testing: process of "exercising" software to ensure all requirements are met.
    • Test case: a series of inputs, preconditions, and expected outcomes to ensure compliance with a specific requirement.
    • Reliability: probability that software will not cause system failure for a specified time range and conditions
    • Test plan: record of applying test cases and the reasoning behind those cases
    • System testing: tests both functional and non-functional requirements.

Static Testing

  • Static testing is testing without execution.
  • Involves stuff like code review, walkthroughs, inspections, etc. Allows attributes like quality, compliance, and maintainability to be studied.
  • Essentially verification.
  • Static testing not limited to code: can also statically inspect system design and requirement documents.
  • We do static testing because:
  • Errors hide errors: errors interact; if we just test by running code, we fix one thing at a time, whereas statically we can spot something that leads to multiple problems.
  • Code does not need to be complete: we can look at code before it's complete enough to run.
  • Allows quality checking: can also check how well code is written, and whether or not it is up to standard.
  • But naturally inspection alone will not do.
  • Inspection is bad at finding unexpected component interactions, timing and performance issues, which is why we need dynamic testing.

Dynamic Testing

  • Dynamic testing is executing code with given test cases. It also involves validation.
  • Code segments must be complete, but we can test on function, class, and system level.
  • Need a test plan and cases, and involves
    • Structural testing: (white box testing), derived from data flow of system
    • Functional testing: (black box testing), derived from formal specification
  • A piece of code can be turned into a control flow graph.
  • Once we have one of those, we can then think about our test cases, which should cover all possibilities -- as best as possible, anyway.
  • statement adequacy is when all statements have been executed by at least one test. Statement coverage is then \(\frac{\textrm{number of executed statements}}{\textrm{number of statements}}\).
  • Complementary to statement adequacy are branch adequacy (each branch of each decision is taken at least once), condition coverage (each condition is exercised at least once), and path coverage (each path is followed) -- a small example follows at the end of this section.
  • Functional, black box testing, has 6 steps:
    1. Identify what we want software to do
    2. Make input data based on func specifications
    3. Determine what is meant to be returned as output
    4. Execute case
    5. Compare actual and expected results
    6. Check if application works as customer needs
    No internal code is considered.
  • Can apply at different levels:
    • Unit (from module interface spec)
    • Integration (from API, subsystem spec)
    • System (from system requirements)
    • Regressions (check common bugs that we've had in the past)
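
To make the coverage criteria concrete, a small invented example: one test can be statement-adequate without being branch-adequate.

    class Coverage {
    	// two decisions, so four branches to cover
    	static String ticketRate(int age, boolean member) {
    		String rate = (age < 18) ? "child" : "adult";
    		if (member) rate += "-discount";
    		return rate;
    	}
    }

    // Coverage.ticketRate(10, true) alone executes every statement (100%
    // statement coverage), but branch adequacy also needs e.g.
    // Coverage.ticketRate(30, false), so the false side of both decisions
    // is taken too.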

Unit Testing

  • Testing the methods and objects of our system in isolation.
  • Can be automated.
  • We cannot assume that if a function works in one class, it will work in its subclasses. We need to test everything in the subclasses as well.
  • This leads to a lot of work, which is where automated unit testing libraries come in.
  • All we're doing is initialising a thing with inputs and expected outputs, calling the method, and checking if the outputs match.
  • Writing tests takes time, and so every single test must have a purpose, whether that be to verify a behaviour, or to catch a problem.
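
A minimal sketch with JUnit 5, the usual Java choice (the Calculator class here is hypothetical): initialise the thing under test, call the method, and check the output matches.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class CalculatorTest {
    	@Test
    	void addHandlesNegatives() {
    		Calculator c = new Calculator();   // the thing under test (hypothetical class)
    		assertEquals(-1, c.add(2, -3));    // expected output vs actual output
    	}
    }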

Component and System Testing, TDD

  • When we start combining components, we must test that everything works in combination. This comes after unit testing, so we know basic functionality works in isolation.
  • Unit testing does not account for interactions between components.
  • Interface errors are very common. They come from interface misuse (a component not passing correct parameters, receiving unexpected returns), interface misunderstanding (one interface does not understand the behaviour of another), and timing errors (in systems which need timing)
  • Component testing should
    • check extremes of ranges of parameters
    • call interfaces with null values
    • testing how failure is handled by deliberately making interfaces fail
    • stress test, especially for message passing systems
    • vary order components access shared memory
  • The goal of system testing is also to check that all components work.
  • However, with system testing, off-the-shelf/premade components may be added.
  • Components from all teams will be put together, and looking for "emergent behaviour".
  • Emergent behaviours are characteristics that only appear when components interact.
  • Some emergent behaviour is expected, some is unexpected. We need to test for both,
  • and since the testing is very interaction-focused, use-case testing is effective. Use case testing often involves multiple components and thus causes interactions to occur.
  • Test driven development (TDD) is when tests are written first for a code increment, and then the code is written to pass those tests.
  • TDD came from extreme programming, but is now accepted mainstream.
  • The process goes as thus
    1. Identify small, implementable, functional increment
    2. Write an (ideally automated) test
    3. Run all the tests (it will fail) -- this is important to show that this is a new test for the system
    4. Implement functionality and retest -- may include refactoring old code
    5. Move onto next increment when all tests pass
  • TDD is good because
    • It helps clarify what the code is meant to do
    • Writing the test requires strong understanding of functionality
    • Which leads to implementation being easier, since understanding is already there
    But has some drawbacks
    • If you don't know enough to write the tests, you can't develop
    • If certain important scenarios are not included in the tests, this impacts effectiveness of both testing and development
  • We also get
    • High code coverage -- each segment has at least 1 associated test
    • Regression testing -- constantly verifying that new segment has not introduced bugs
    • Simplified debugging -- any bugs must be in the newest segment
    • System documentation -- "tests can double as documentation"
  • TDD is most effective for a new system; when building on top of existing systems / reuse-based development, it is hard to break things down into testable segments.
  • Multithreading also makes this difficult (because thread timing makes test outcomes inconsistent)
  • TDD does not replace system testing.

User Testing

  • It is important to give user time with the system to use it, so that it is used in its intended environment.
  • User testing has three approaches:
    • Alpha testing for selected users on very early versions of software
    • Beta testing with a larger group of users with a more complete version
    • Acceptance testing for customers, to decide if the system is ready.
  • In alpha testing, users can identify early issues that the dev team may have missed. Not all requirement features may have been implemented at this stage, only those needed for minimum operation and use.
  • Alpha testing reduces the risk of unanticipated changes disrupting business.
  • Beta testing is done with software that is complete or nearly complete.
  • There's a larger group of users which help find perhaps less common issues, and discovering interactions between system and intended environment.
  • Also useful for marketing and generating interest.
  • Acceptance testing is crucial for systems built for a client. The client can use their own data, and see if the system is acceptable to them.
    • Define acceptance criteria first of all, before the contract. May be difficult if requirements are not in place. This makes a test criteria.
    • Plan acceptance testing establishing testing schedule and resources, and coverage and order. This makes a test plan.
    • Derive acceptance tests which should be both functional and non-functional. Should cover all requirements. Carefully consider. This makes the tests.
      • Acceptance Tests define a "user journey" -- a series of steps a user might take -- and then its expected output.
    • Run acceptance tests, ideally in the deployment environment. Difficult to automate. Ideally involve users. This gets the test results.
    • Negotiate test results since unlikely that all acceptance tests will pass. Determine if the failures are negligible or important. This makes a test report.
    • Accept or reject system: either go back for more development, or done.
  • Failed acceptance tests \(\neq\) rejected system. A customer may be willing to accept it even with some minor defects.
  • Or conditional acceptance -- accept on the condition that bugs will be fixed eventually, as part of maintenance.
  • In extreme programming, the user is involved throughout development, so there is no separate acceptance testing step: the tests developed along the way effectively become the acceptance tests.
  • Best users are "typical users" -- difficult. Make sure tests do not reflect a specific user.
  • Automated tests still don't test interaction, so we have manual testing too.
Most companies go for a hybrid agile/plan-driven methodology.